00:00:00.001 Started by upstream project "autotest-per-patch" build number 132412 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:01.353 The recommended git tool is: git 00:00:01.354 using credential 00000000-0000-0000-0000-000000000002 00:00:01.356 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:01.372 Fetching changes from the remote Git repository 00:00:01.376 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:01.390 Using shallow fetch with depth 1 00:00:01.390 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:01.390 > git --version # timeout=10 00:00:01.403 > git --version # 'git version 2.39.2' 00:00:01.403 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:01.418 Setting http proxy: proxy-dmz.intel.com:911 00:00:01.418 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.915 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.930 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.944 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.944 > git config core.sparsecheckout # timeout=10 00:00:06.957 > git read-tree -mu HEAD # timeout=10 00:00:06.975 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.999 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.000 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.102 [Pipeline] Start of Pipeline 00:00:07.116 [Pipeline] library 00:00:07.118 Loading library shm_lib@master 00:00:07.118 Library shm_lib@master is cached. Copying from home. 00:00:07.133 [Pipeline] node 00:00:22.134 Still waiting to schedule task 00:00:22.135 Waiting for next available executor on ‘vagrant-vm-host’ 00:05:46.414 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest_2 00:05:46.416 [Pipeline] { 00:05:46.426 [Pipeline] catchError 00:05:46.428 [Pipeline] { 00:05:46.506 [Pipeline] wrap 00:05:46.516 [Pipeline] { 00:05:46.523 [Pipeline] stage 00:05:46.525 [Pipeline] { (Prologue) 00:05:46.542 [Pipeline] echo 00:05:46.543 Node: VM-host-SM4 00:05:46.549 [Pipeline] cleanWs 00:05:46.557 [WS-CLEANUP] Deleting project workspace... 00:05:46.557 [WS-CLEANUP] Deferred wipeout is used... 
00:05:46.562 [WS-CLEANUP] done
00:05:46.749 [Pipeline] setCustomBuildProperty
00:05:46.859 [Pipeline] httpRequest
00:05:47.179 [Pipeline] echo
00:05:47.180 Sorcerer 10.211.164.20 is alive
00:05:47.188 [Pipeline] retry
00:05:47.190 [Pipeline] {
00:05:47.203 [Pipeline] httpRequest
00:05:47.206 HttpMethod: GET
00:05:47.207 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:05:47.207 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:05:47.208 Response Code: HTTP/1.1 200 OK
00:05:47.209 Success: Status code 200 is in the accepted range: 200,404
00:05:47.209 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:05:47.354 [Pipeline] }
00:05:47.372 [Pipeline] // retry
00:05:47.379 [Pipeline] sh
00:05:47.657 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:05:47.670 [Pipeline] httpRequest
00:05:47.974 [Pipeline] echo
00:05:47.976 Sorcerer 10.211.164.20 is alive
00:05:47.986 [Pipeline] retry
00:05:47.989 [Pipeline] {
00:05:48.004 [Pipeline] httpRequest
00:05:48.008 HttpMethod: GET
00:05:48.009 URL: http://10.211.164.20/packages/spdk_7bc1aace114e829dcd7661e5d80f80efc04bb5ba.tar.gz
00:05:48.010 Sending request to url: http://10.211.164.20/packages/spdk_7bc1aace114e829dcd7661e5d80f80efc04bb5ba.tar.gz
00:05:48.010 Response Code: HTTP/1.1 200 OK
00:05:48.011 Success: Status code 200 is in the accepted range: 200,404
00:05:48.011 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_7bc1aace114e829dcd7661e5d80f80efc04bb5ba.tar.gz
00:05:50.284 [Pipeline] }
00:05:50.303 [Pipeline] // retry
00:05:50.312 [Pipeline] sh
00:05:50.592 + tar --no-same-owner -xf spdk_7bc1aace114e829dcd7661e5d80f80efc04bb5ba.tar.gz
00:05:53.890 [Pipeline] sh
00:05:54.171 + git -C spdk log --oneline -n5
00:05:54.171 7bc1aace1 dif: Set DIF field to 0 explicitly if its check is disabled
00:05:54.172 ce2cd8dc9 bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata
00:05:54.172 2d31d77ac ut/bdev: Remove duplication with many stups among unit test files
00:05:54.172 4c87f1208 accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx
00:05:54.172 e9f1d748e accel: Fix comments for spdk_accel_*_dif_verify_copy()
00:05:54.189 [Pipeline] writeFile
00:05:54.207 [Pipeline] sh
00:05:54.519 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:05:54.533 [Pipeline] sh
00:05:54.817 + cat autorun-spdk.conf
00:05:54.817 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:54.817 SPDK_TEST_NVME=1
00:05:54.817 SPDK_TEST_FTL=1
00:05:54.817 SPDK_TEST_ISAL=1
00:05:54.817 SPDK_RUN_ASAN=1
00:05:54.817 SPDK_RUN_UBSAN=1
00:05:54.817 SPDK_TEST_XNVME=1
00:05:54.817 SPDK_TEST_NVME_FDP=1
00:05:54.817 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:54.823 RUN_NIGHTLY=0
00:05:54.826 [Pipeline] }
00:05:54.840 [Pipeline] // stage
00:05:54.856 [Pipeline] stage
00:05:54.857 [Pipeline] { (Run VM)
00:05:54.869 [Pipeline] sh
00:05:55.149 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:05:55.149 + echo 'Start stage prepare_nvme.sh'
00:05:55.149 Start stage prepare_nvme.sh
00:05:55.149 + [[ -n 10 ]]
00:05:55.149 + disk_prefix=ex10
00:05:55.149 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:05:55.149 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:05:55.149 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:05:55.149 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:55.149 ++ SPDK_TEST_NVME=1
00:05:55.149 ++ SPDK_TEST_FTL=1
00:05:55.149 ++ SPDK_TEST_ISAL=1
00:05:55.149 ++ SPDK_RUN_ASAN=1
00:05:55.149 ++ SPDK_RUN_UBSAN=1
00:05:55.149 ++ SPDK_TEST_XNVME=1
00:05:55.149 ++ SPDK_TEST_NVME_FDP=1
00:05:55.149 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:55.149 ++ RUN_NIGHTLY=0
00:05:55.149 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:05:55.149 + nvme_files=()
00:05:55.149 + declare -A nvme_files
00:05:55.149 + backend_dir=/var/lib/libvirt/images/backends
00:05:55.149 + nvme_files['nvme.img']=5G
00:05:55.149 + nvme_files['nvme-cmb.img']=5G
00:05:55.149 + nvme_files['nvme-multi0.img']=4G
00:05:55.149 + nvme_files['nvme-multi1.img']=4G
00:05:55.149 + nvme_files['nvme-multi2.img']=4G
00:05:55.149 + nvme_files['nvme-openstack.img']=8G
00:05:55.149 + nvme_files['nvme-zns.img']=5G
00:05:55.149 + (( SPDK_TEST_NVME_PMR == 1 ))
00:05:55.149 + (( SPDK_TEST_FTL == 1 ))
00:05:55.149 + nvme_files["nvme-ftl.img"]=6G
00:05:55.149 + (( SPDK_TEST_NVME_FDP == 1 ))
00:05:55.149 + nvme_files["nvme-fdp.img"]=1G
00:05:55.149 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:05:55.149 + for nvme in "${!nvme_files[@]}"
00:05:55.149 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G
00:05:55.149 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:05:55.149 + for nvme in "${!nvme_files[@]}"
00:05:55.149 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G
00:05:55.149 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:05:55.149 + for nvme in "${!nvme_files[@]}"
00:05:55.149 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G
00:05:55.149 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:05:55.149 + for nvme in "${!nvme_files[@]}"
00:05:55.149 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G
00:05:55.149 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:05:55.149 + for nvme in "${!nvme_files[@]}"
00:05:55.149 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G
00:05:55.407 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:05:55.407 + for nvme in "${!nvme_files[@]}"
00:05:55.407 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G
00:05:55.407 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:05:55.407 + for nvme in "${!nvme_files[@]}"
00:05:55.407 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G
00:05:55.407 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:05:55.407 + for nvme in "${!nvme_files[@]}"
00:05:55.407 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G
00:05:55.407 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:05:55.407 + for nvme in "${!nvme_files[@]}"
00:05:55.407 + sudo -E
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:05:55.665 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:55.665 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:05:55.665 + echo 'End stage prepare_nvme.sh' 00:05:55.665 End stage prepare_nvme.sh 00:05:55.677 [Pipeline] sh 00:05:55.957 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:55.957 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:05:56.214 00:05:56.214 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:05:56.214 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:05:56.214 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:05:56.214 HELP=0 00:05:56.214 DRY_RUN=0 00:05:56.214 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img, 00:05:56.214 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:05:56.214 NVME_AUTO_CREATE=0 00:05:56.214 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,, 00:05:56.214 NVME_CMB=,,,, 00:05:56.214 NVME_PMR=,,,, 00:05:56.214 NVME_ZNS=,,,, 00:05:56.214 NVME_MS=true,,,, 00:05:56.214 NVME_FDP=,,,on, 00:05:56.214 SPDK_VAGRANT_DISTRO=fedora39 00:05:56.214 SPDK_VAGRANT_VMCPU=10 00:05:56.214 SPDK_VAGRANT_VMRAM=12288 00:05:56.214 SPDK_VAGRANT_PROVIDER=libvirt 00:05:56.214 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:56.214 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:56.214 SPDK_OPENSTACK_NETWORK=0 00:05:56.214 VAGRANT_PACKAGE_BOX=0 00:05:56.214 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:05:56.214 FORCE_DISTRO=true 00:05:56.214 VAGRANT_BOX_VERSION= 00:05:56.214 EXTRA_VAGRANTFILES= 00:05:56.214 NIC_MODEL=e1000 00:05:56.214 00:05:56.214 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt' 00:05:56.214 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:06:00.400 Bringing machine 'default' up with 'libvirt' provider... 00:06:00.658 ==> default: Creating image (snapshot of base box volume). 00:06:00.658 ==> default: Creating domain with the following settings... 
00:06:00.658 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732115866_98ee91142880d174eaaf
00:06:00.658 ==> default: -- Domain type: kvm
00:06:00.658 ==> default: -- Cpus: 10
00:06:00.658 ==> default: -- Feature: acpi
00:06:00.658 ==> default: -- Feature: apic
00:06:00.658 ==> default: -- Feature: pae
00:06:00.658 ==> default: -- Memory: 12288M
00:06:00.658 ==> default: -- Memory Backing: hugepages:
00:06:00.658 ==> default: -- Management MAC:
00:06:00.658 ==> default: -- Loader:
00:06:00.658 ==> default: -- Nvram:
00:06:00.658 ==> default: -- Base box: spdk/fedora39
00:06:00.658 ==> default: -- Storage pool: default
00:06:00.658 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732115866_98ee91142880d174eaaf.img (20G)
00:06:00.658 ==> default: -- Volume Cache: default
00:06:00.658 ==> default: -- Kernel:
00:06:00.658 ==> default: -- Initrd:
00:06:00.658 ==> default: -- Graphics Type: vnc
00:06:00.658 ==> default: -- Graphics Port: -1
00:06:00.658 ==> default: -- Graphics IP: 127.0.0.1
00:06:00.658 ==> default: -- Graphics Password: Not defined
00:06:00.658 ==> default: -- Video Type: cirrus
00:06:00.658 ==> default: -- Video VRAM: 9216
00:06:00.658 ==> default: -- Sound Type:
00:06:00.658 ==> default: -- Keymap: en-us
00:06:00.658 ==> default: -- TPM Path:
00:06:00.658 ==> default: -- INPUT: type=mouse, bus=ps2
00:06:00.658 ==> default: -- Command line args:
00:06:00.658 ==> default: -> value=-device,
00:06:00.658 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:06:00.658 ==> default: -> value=-drive,
00:06:00.658 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:06:00.658 ==> default: -> value=-device,
00:06:00.658 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:06:00.658 ==> default: -> value=-device,
00:06:00.658 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:06:00.658 ==> default: -> value=-drive,
00:06:00.658 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0,
00:06:00.658 ==> default: -> value=-device,
00:06:00.658 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:00.659 ==> default: -> value=-device,
00:06:00.659 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:06:00.659 ==> default: -> value=-drive,
00:06:00.659 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:06:00.659 ==> default: -> value=-device,
00:06:00.659 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:00.659 ==> default: -> value=-drive,
00:06:00.659 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:06:00.659 ==> default: -> value=-device,
00:06:00.659 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:00.659 ==> default: -> value=-drive,
00:06:00.659 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:06:00.659 ==> default: -> value=-device,
00:06:00.659 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:00.659 ==> default: -> value=-device,
00:06:00.659 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:06:00.659 ==> default: -> value=-device,
00:06:00.659 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:06:00.659 ==> default: -> value=-drive,
00:06:00.659 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:06:00.659 ==> default: -> value=-device,
00:06:00.659 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:00.916 ==> default: Creating shared folders metadata...
00:06:00.916 ==> default: Starting domain.
00:06:02.832 ==> default: Waiting for domain to get an IP address...
00:06:17.761 ==> default: Waiting for SSH to become available...
00:06:19.221 ==> default: Configuring and enabling network interfaces...
00:06:24.499 default: SSH address: 192.168.121.201:22
00:06:24.499 default: SSH username: vagrant
00:06:24.499 default: SSH auth method: private key
00:06:26.401 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:06:34.520 ==> default: Mounting SSHFS shared folder...
00:06:35.896 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:06:36.155 ==> default: Checking Mount..
00:06:37.531 ==> default: Folder Successfully Mounted!
00:06:37.531 ==> default: Running provisioner: file...
00:06:38.478 default: ~/.gitconfig => .gitconfig
00:06:38.739
00:06:38.739 SUCCESS!
00:06:38.739
00:06:38.739 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:06:38.739 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:06:38.739 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:06:38.739
00:06:38.748 [Pipeline] }
00:06:38.763 [Pipeline] // stage
00:06:38.772 [Pipeline] dir
00:06:38.773 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:06:38.775 [Pipeline] {
00:06:38.788 [Pipeline] catchError
00:06:38.789 [Pipeline] {
00:06:38.802 [Pipeline] sh
00:06:39.083 + vagrant ssh-config --host vagrant
00:06:39.083 + sed -ne /^Host/,$p
00:06:39.083 + tee ssh_conf
00:06:42.370 Host vagrant
00:06:42.370 HostName 192.168.121.201
00:06:42.370 User vagrant
00:06:42.370 Port 22
00:06:42.370 UserKnownHostsFile /dev/null
00:06:42.370 StrictHostKeyChecking no
00:06:42.370 PasswordAuthentication no
00:06:42.370 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:06:42.370 IdentitiesOnly yes
00:06:42.370 LogLevel FATAL
00:06:42.370 ForwardAgent yes
00:06:42.370 ForwardX11 yes
00:06:42.370
00:06:42.383 [Pipeline] withEnv
00:06:42.386 [Pipeline] {
00:06:42.400 [Pipeline] sh
00:06:42.743 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:06:42.743 source /etc/os-release
00:06:42.743 [[ -e /image.version ]] && img=$(< /image.version)
00:06:42.743 # Minimal, systemd-like check.
00:06:42.744 if [[ -e /.dockerenv ]]; then 00:06:42.744 # Clear garbage from the node's name: 00:06:42.744 # agt-er_autotest_547-896 -> autotest_547-896 00:06:42.744 # $HOSTNAME is the actual container id 00:06:42.744 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:42.744 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:42.744 # We can assume this is a mount from a host where container is running, 00:06:42.744 # so fetch its hostname to easily identify the target swarm worker. 00:06:42.744 container="$(< /etc/hostname) ($agent)" 00:06:42.744 else 00:06:42.744 # Fallback 00:06:42.744 container=$agent 00:06:42.744 fi 00:06:42.744 fi 00:06:42.744 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:42.744 00:06:42.755 [Pipeline] } 00:06:42.772 [Pipeline] // withEnv 00:06:42.781 [Pipeline] setCustomBuildProperty 00:06:42.799 [Pipeline] stage 00:06:42.801 [Pipeline] { (Tests) 00:06:42.819 [Pipeline] sh 00:06:43.129 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:43.144 [Pipeline] sh 00:06:43.440 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:43.721 [Pipeline] timeout 00:06:43.722 Timeout set to expire in 50 min 00:06:43.724 [Pipeline] { 00:06:43.739 [Pipeline] sh 00:06:44.017 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:44.584 HEAD is now at 7bc1aace1 dif: Set DIF field to 0 explicitly if its check is disabled 00:06:44.598 [Pipeline] sh 00:06:44.878 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:45.150 [Pipeline] sh 00:06:45.431 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:45.705 [Pipeline] sh 00:06:45.984 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:06:46.242 ++ readlink -f spdk_repo 00:06:46.243 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:46.243 + [[ -n /home/vagrant/spdk_repo ]] 00:06:46.243 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:46.243 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:46.243 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:46.243 + [[ ! 
-d /home/vagrant/spdk_repo/output ]]
00:06:46.243 + [[ -d /home/vagrant/spdk_repo/output ]]
00:06:46.243 + [[ nvme-vg-autotest == pkgdep-* ]]
00:06:46.243 + cd /home/vagrant/spdk_repo
00:06:46.243 + source /etc/os-release
00:06:46.243 ++ NAME='Fedora Linux'
00:06:46.243 ++ VERSION='39 (Cloud Edition)'
00:06:46.243 ++ ID=fedora
00:06:46.243 ++ VERSION_ID=39
00:06:46.243 ++ VERSION_CODENAME=
00:06:46.243 ++ PLATFORM_ID=platform:f39
00:06:46.243 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:06:46.243 ++ ANSI_COLOR='0;38;2;60;110;180'
00:06:46.243 ++ LOGO=fedora-logo-icon
00:06:46.243 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:06:46.243 ++ HOME_URL=https://fedoraproject.org/
00:06:46.243 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:06:46.243 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:06:46.243 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:06:46.243 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:06:46.243 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:06:46.243 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:06:46.243 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:06:46.243 ++ SUPPORT_END=2024-11-12
00:06:46.243 ++ VARIANT='Cloud Edition'
00:06:46.243 ++ VARIANT_ID=cloud
00:06:46.243 + uname -a
00:06:46.243 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:06:46.243 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:46.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:47.074 Hugepages
00:06:47.074 node hugesize free / total
00:06:47.074 node0 1048576kB 0 / 0
00:06:47.074 node0 2048kB 0 / 0
00:06:47.074
00:06:47.074 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:47.074 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:06:47.074 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:06:47.074 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:06:47.074 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:06:47.074 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:06:47.074 + rm -f /tmp/spdk-ld-path
00:06:47.074 + source autorun-spdk.conf
00:06:47.074 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:47.074 ++ SPDK_TEST_NVME=1
00:06:47.074 ++ SPDK_TEST_FTL=1
00:06:47.074 ++ SPDK_TEST_ISAL=1
00:06:47.074 ++ SPDK_RUN_ASAN=1
00:06:47.074 ++ SPDK_RUN_UBSAN=1
00:06:47.074 ++ SPDK_TEST_XNVME=1
00:06:47.074 ++ SPDK_TEST_NVME_FDP=1
00:06:47.074 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:47.074 ++ RUN_NIGHTLY=0
00:06:47.074 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:06:47.074 + [[ -n '' ]]
00:06:47.074 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:06:47.074 + for M in /var/spdk/build-*-manifest.txt
00:06:47.074 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:06:47.074 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:06:47.074 + for M in /var/spdk/build-*-manifest.txt
00:06:47.074 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:06:47.074 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:06:47.075 + for M in /var/spdk/build-*-manifest.txt
00:06:47.075 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:06:47.075 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:06:47.075 ++ uname
00:06:47.075 + [[ Linux == \L\i\n\u\x ]]
00:06:47.075 + sudo dmesg -T
00:06:47.075 + sudo dmesg --clear
00:06:47.075 + dmesg_pid=5307
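The dmesg handling in this stretch of the trace (dmesg -T, dmesg --clear, then the backgrounded dmesg -Tw whose PID is kept as dmesg_pid) snapshots the kernel ring buffer, empties it, and follows only what the tests trigger afterwards. A minimal sketch of the same pattern; the log file names here are illustrative, not taken from the job:

  out=/home/vagrant/spdk_repo/output     # artifact directory, as in DIR_OUTPUT above
  sudo dmesg -T > "$out/pre-dmesg.log"   # timestamped snapshot of everything so far
  sudo dmesg --clear                     # empty the ring buffer
  sudo dmesg -Tw > "$out/dmesg.log" &    # follow only test-generated messages
  dmesg_pid=$!                           # remembered for teardown, cf. dmesg_pid=5307
  # ... run the tests, then stop the follower:
  kill "$dmesg_pid"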
+ [[ Fedora Linux == FreeBSD ]] 00:06:47.075 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:47.075 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:47.075 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:47.075 + [[ -x /usr/src/fio-static/fio ]] 00:06:47.075 + sudo dmesg -Tw 00:06:47.075 + export FIO_BIN=/usr/src/fio-static/fio 00:06:47.075 + FIO_BIN=/usr/src/fio-static/fio 00:06:47.075 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:47.075 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:47.075 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:47.075 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:47.075 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:47.075 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:47.075 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:47.075 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:47.075 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:47.334 15:18:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:47.334 15:18:33 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:47.334 15:18:33 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:47.334 15:18:33 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:06:47.334 15:18:33 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:06:47.334 15:18:33 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:06:47.334 15:18:33 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:06:47.334 15:18:33 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:06:47.334 15:18:33 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:06:47.334 15:18:33 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:06:47.334 15:18:33 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:47.334 15:18:33 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:06:47.334 15:18:33 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:47.334 15:18:33 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:47.334 15:18:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:47.334 15:18:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.334 15:18:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:47.334 15:18:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:47.334 15:18:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.334 15:18:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.334 15:18:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.334 15:18:33 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.334 15:18:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.334 15:18:33 -- paths/export.sh@5 -- $ export PATH 00:06:47.334 15:18:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.334 15:18:33 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:47.334 15:18:33 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:47.334 15:18:33 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732115913.XXXXXX 00:06:47.334 15:18:33 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732115913.BS1m9o 00:06:47.334 15:18:33 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:47.334 15:18:33 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:47.334 15:18:33 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:47.334 15:18:33 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:47.334 15:18:33 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:47.334 15:18:33 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:47.334 15:18:33 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:47.334 15:18:33 -- common/autotest_common.sh@10 -- $ set +x 00:06:47.334 15:18:33 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:06:47.334 15:18:33 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:47.334 15:18:33 -- pm/common@17 -- $ local monitor 00:06:47.334 15:18:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.334 15:18:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.334 15:18:33 -- pm/common@25 -- $ sleep 1 00:06:47.334 15:18:33 -- pm/common@21 -- $ date +%s 00:06:47.334 15:18:33 -- pm/common@21 -- $ date +%s 00:06:47.334 15:18:33 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732115913 00:06:47.334 15:18:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732115913 00:06:47.334 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732115913_collect-vmstat.pm.log 00:06:47.334 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732115913_collect-cpu-load.pm.log 00:06:48.272 15:18:34 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:48.272 15:18:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:48.272 15:18:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:48.272 15:18:34 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:48.272 15:18:34 -- spdk/autobuild.sh@16 -- $ date -u 00:06:48.272 Wed Nov 20 03:18:34 PM UTC 2024 00:06:48.272 15:18:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:48.531 v25.01-pre-233-g7bc1aace1 00:06:48.531 15:18:34 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:06:48.531 15:18:34 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:06:48.531 15:18:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:48.531 15:18:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:48.531 15:18:34 -- common/autotest_common.sh@10 -- $ set +x 00:06:48.531 ************************************ 00:06:48.531 START TEST asan 00:06:48.531 ************************************ 00:06:48.531 using asan 00:06:48.531 15:18:34 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:06:48.531 00:06:48.531 real 0m0.000s 00:06:48.531 user 0m0.000s 00:06:48.531 sys 0m0.000s 00:06:48.531 15:18:34 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:48.531 ************************************ 00:06:48.531 END TEST asan 00:06:48.531 ************************************ 00:06:48.531 15:18:34 asan -- common/autotest_common.sh@10 -- $ set +x 00:06:48.531 15:18:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:48.531 15:18:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:48.531 15:18:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:48.531 15:18:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:48.531 15:18:34 -- common/autotest_common.sh@10 -- $ set +x 00:06:48.531 ************************************ 00:06:48.531 START TEST ubsan 00:06:48.531 ************************************ 00:06:48.531 using ubsan 00:06:48.531 15:18:34 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:48.531 00:06:48.531 real 0m0.000s 00:06:48.531 user 0m0.000s 00:06:48.531 sys 0m0.000s 00:06:48.531 15:18:34 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:48.531 ************************************ 00:06:48.531 END TEST ubsan 00:06:48.531 15:18:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:48.531 ************************************ 00:06:48.531 15:18:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:48.531 15:18:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:48.531 15:18:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:48.531 15:18:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:48.531 15:18:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:48.531 15:18:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:48.531 15:18:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:06:48.531 15:18:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:48.531 15:18:34 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:06:48.531 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:48.531 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:49.099 Using 'verbs' RDMA provider 00:07:05.357 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:07:20.235 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:07:20.235 Creating mk/config.mk...done. 00:07:20.235 Creating mk/cc.flags.mk...done. 00:07:20.235 Type 'make' to build. 00:07:20.235 15:19:04 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:07:20.235 15:19:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:20.235 15:19:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:20.235 15:19:04 -- common/autotest_common.sh@10 -- $ set +x 00:07:20.235 ************************************ 00:07:20.235 START TEST make 00:07:20.235 ************************************ 00:07:20.235 15:19:04 make -- common/autotest_common.sh@1129 -- $ make -j10 00:07:20.235 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:07:20.235 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:07:20.235 meson setup builddir \ 00:07:20.235 -Dwith-libaio=enabled \ 00:07:20.235 -Dwith-liburing=enabled \ 00:07:20.235 -Dwith-libvfn=disabled \ 00:07:20.235 -Dwith-spdk=disabled \ 00:07:20.235 -Dexamples=false \ 00:07:20.235 -Dtests=false \ 00:07:20.235 -Dtools=false && \ 00:07:20.235 meson compile -C builddir && \ 00:07:20.235 cd -) 00:07:20.235 make[1]: Nothing to be done for 'all'. 
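The asan and ubsan blocks above come from SPDK's run_test helper, which brackets an arbitrary command with START/END banners and timing. Roughly, the shape is as follows; this is a simplified sketch, not the actual helper from autotest_common.sh:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      time "$@"            # produces the real/user/sys lines seen in the log
      local rc=$?
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  run_test asan echo 'using asan'    # invocation exactly as seen above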
00:07:22.146 The Meson build system 00:07:22.146 Version: 1.5.0 00:07:22.146 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:07:22.146 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:07:22.146 Build type: native build 00:07:22.146 Project name: xnvme 00:07:22.146 Project version: 0.7.5 00:07:22.146 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:22.146 C linker for the host machine: cc ld.bfd 2.40-14 00:07:22.146 Host machine cpu family: x86_64 00:07:22.146 Host machine cpu: x86_64 00:07:22.146 Message: host_machine.system: linux 00:07:22.146 Compiler for C supports arguments -Wno-missing-braces: YES 00:07:22.146 Compiler for C supports arguments -Wno-cast-function-type: YES 00:07:22.146 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:07:22.146 Run-time dependency threads found: YES 00:07:22.146 Has header "setupapi.h" : NO 00:07:22.146 Has header "linux/blkzoned.h" : YES 00:07:22.146 Has header "linux/blkzoned.h" : YES (cached) 00:07:22.146 Has header "libaio.h" : YES 00:07:22.146 Library aio found: YES 00:07:22.146 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:22.146 Run-time dependency liburing found: YES 2.2 00:07:22.146 Dependency libvfn skipped: feature with-libvfn disabled 00:07:22.146 Found CMake: /usr/bin/cmake (3.27.7) 00:07:22.146 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:07:22.146 Subproject spdk : skipped: feature with-spdk disabled 00:07:22.146 Run-time dependency appleframeworks found: NO (tried framework) 00:07:22.146 Run-time dependency appleframeworks found: NO (tried framework) 00:07:22.146 Library rt found: YES 00:07:22.146 Checking for function "clock_gettime" with dependency -lrt: YES 00:07:22.146 Configuring xnvme_config.h using configuration 00:07:22.146 Configuring xnvme.spec using configuration 00:07:22.146 Run-time dependency bash-completion found: YES 2.11 00:07:22.146 Message: Bash-completions: /usr/share/bash-completion/completions 00:07:22.146 Program cp found: YES (/usr/bin/cp) 00:07:22.146 Build targets in project: 3 00:07:22.146 00:07:22.146 xnvme 0.7.5 00:07:22.146 00:07:22.146 Subprojects 00:07:22.146 spdk : NO Feature 'with-spdk' disabled 00:07:22.146 00:07:22.146 User defined options 00:07:22.146 examples : false 00:07:22.146 tests : false 00:07:22.146 tools : false 00:07:22.146 with-libaio : enabled 00:07:22.146 with-liburing: enabled 00:07:22.146 with-libvfn : disabled 00:07:22.146 with-spdk : disabled 00:07:22.146 00:07:22.146 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:22.714 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:07:22.714 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:07:22.714 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:07:22.714 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:07:22.714 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:07:22.714 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:07:22.714 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:07:22.714 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:07:22.714 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:07:22.714 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:07:22.714 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:07:22.714 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:07:22.714 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:07:22.714 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:07:22.714 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:07:22.973 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:07:22.973 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:07:22.973 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:07:22.973 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:07:22.973 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:07:22.973 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:07:22.973 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:07:22.973 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:07:22.973 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:07:22.973 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:07:22.973 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:07:22.973 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:07:22.973 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:07:22.973 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:07:22.973 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:07:22.973 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:07:22.973 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:07:22.973 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:07:22.973 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:07:22.973 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:07:22.973 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:07:22.973 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:07:22.973 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:07:22.973 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:07:22.973 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:07:22.973 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:07:22.973 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:07:22.973 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:07:23.300 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:07:23.300 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:07:23.300 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:07:23.300 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:07:23.300 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:07:23.300 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:07:23.300 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:07:23.300 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:07:23.300 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:07:23.300 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:07:23.300 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:07:23.300 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:07:23.300 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:07:23.300 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:07:23.300 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:07:23.300 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:07:23.300 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:07:23.300 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:07:23.300 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:07:23.300 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:07:23.300 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:07:23.300 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:07:23.300 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:07:23.557 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:07:23.557 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:07:23.557 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:07:23.557 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:07:23.557 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:07:23.557 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:07:23.557 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:07:23.557 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:07:23.815 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:07:23.815 [75/76] Linking static target lib/libxnvme.a 00:07:23.815 [76/76] Linking target lib/libxnvme.so.0.7.5 00:07:23.815 INFO: autodetecting backend as ninja 00:07:23.815 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:07:24.072 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:07:32.186 The Meson build system 00:07:32.186 Version: 1.5.0 00:07:32.186 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:32.186 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:32.186 Build type: native build 00:07:32.186 Program cat found: YES (/usr/bin/cat) 00:07:32.186 Project name: DPDK 00:07:32.186 Project version: 24.03.0 00:07:32.186 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:32.186 C linker for the host machine: cc ld.bfd 2.40-14 00:07:32.186 Host machine cpu family: x86_64 00:07:32.186 Host machine cpu: x86_64 00:07:32.186 Message: ## Building in Developer Mode ## 00:07:32.186 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:32.186 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:32.186 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:32.186 Program python3 found: YES (/usr/bin/python3) 00:07:32.186 Program cat found: YES (/usr/bin/cat) 00:07:32.186 Compiler for C supports arguments -march=native: YES 00:07:32.186 Checking for size of "void *" : 8 00:07:32.186 Checking for size of "void *" : 8 (cached) 00:07:32.186 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:07:32.186 Library m found: YES 00:07:32.186 Library numa found: YES 00:07:32.186 Has header "numaif.h" : YES 00:07:32.186 Library fdt found: NO 00:07:32.186 Library execinfo found: NO 00:07:32.186 Has header "execinfo.h" : YES 00:07:32.186 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:32.186 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:32.186 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:32.186 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:32.186 Run-time dependency openssl found: YES 3.1.1 00:07:32.186 Run-time dependency libpcap found: YES 1.10.4 00:07:32.186 Has header "pcap.h" with dependency libpcap: YES 00:07:32.186 Compiler for C supports arguments -Wcast-qual: YES 00:07:32.186 Compiler for C supports arguments -Wdeprecated: YES 00:07:32.186 Compiler for C supports arguments -Wformat: YES 00:07:32.186 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:32.186 Compiler for C supports arguments -Wformat-security: NO 00:07:32.186 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:32.186 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:32.186 Compiler for C supports arguments -Wnested-externs: YES 00:07:32.186 Compiler for C supports arguments -Wold-style-definition: YES 00:07:32.186 Compiler for C supports arguments -Wpointer-arith: YES 00:07:32.186 Compiler for C supports arguments -Wsign-compare: YES 00:07:32.186 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:32.186 Compiler for C supports arguments -Wundef: YES 00:07:32.186 Compiler for C supports arguments -Wwrite-strings: YES 00:07:32.186 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:32.186 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:32.186 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:32.186 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:32.186 Program objdump found: YES (/usr/bin/objdump) 00:07:32.186 Compiler for C supports arguments -mavx512f: YES 00:07:32.186 Checking if "AVX512 checking" compiles: YES 00:07:32.186 Fetching value of define "__SSE4_2__" : 1 00:07:32.186 Fetching value of define "__AES__" : 1 00:07:32.186 Fetching value of define "__AVX__" : 1 00:07:32.186 Fetching value of define "__AVX2__" : 1 00:07:32.186 Fetching value of define "__AVX512BW__" : 1 00:07:32.186 Fetching value of define "__AVX512CD__" : 1 00:07:32.186 Fetching value of define "__AVX512DQ__" : 1 00:07:32.186 Fetching value of define "__AVX512F__" : 1 00:07:32.186 Fetching value of define "__AVX512VL__" : 1 00:07:32.186 Fetching value of define "__PCLMUL__" : 1 00:07:32.186 Fetching value of define "__RDRND__" : 1 00:07:32.186 Fetching value of define "__RDSEED__" : 1 00:07:32.186 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:32.186 Fetching value of define "__znver1__" : (undefined) 00:07:32.186 Fetching value of define "__znver2__" : (undefined) 00:07:32.186 Fetching value of define "__znver3__" : (undefined) 00:07:32.186 Fetching value of define "__znver4__" : (undefined) 00:07:32.186 Library asan found: YES 00:07:32.186 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:32.186 Message: lib/log: Defining dependency "log" 00:07:32.186 Message: lib/kvargs: Defining dependency "kvargs" 00:07:32.186 Message: lib/telemetry: Defining dependency "telemetry" 00:07:32.186 Library rt found: YES 00:07:32.186 Checking for function "getentropy" : NO 00:07:32.186 
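Each "Compiler for C supports arguments ..." probe above is Meson compiling a throwaway test program with the flag under test and recording whether the compiler accepts it. The same check can be reproduced by hand, e.g. for -mavx512f; this one-liner is illustrative and not part of the build:

  echo 'int main(void) { return 0; }' \
    | cc -mavx512f -Werror -x c -o /dev/null - \
    && echo '-mavx512f: YES' || echo '-mavx512f: NO'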
Message: lib/eal: Defining dependency "eal" 00:07:32.186 Message: lib/ring: Defining dependency "ring" 00:07:32.186 Message: lib/rcu: Defining dependency "rcu" 00:07:32.186 Message: lib/mempool: Defining dependency "mempool" 00:07:32.186 Message: lib/mbuf: Defining dependency "mbuf" 00:07:32.186 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:32.186 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:32.186 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:32.186 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:32.186 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:32.186 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:07:32.186 Compiler for C supports arguments -mpclmul: YES 00:07:32.186 Compiler for C supports arguments -maes: YES 00:07:32.186 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:32.186 Compiler for C supports arguments -mavx512bw: YES 00:07:32.186 Compiler for C supports arguments -mavx512dq: YES 00:07:32.187 Compiler for C supports arguments -mavx512vl: YES 00:07:32.187 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:32.187 Compiler for C supports arguments -mavx2: YES 00:07:32.187 Compiler for C supports arguments -mavx: YES 00:07:32.187 Message: lib/net: Defining dependency "net" 00:07:32.187 Message: lib/meter: Defining dependency "meter" 00:07:32.187 Message: lib/ethdev: Defining dependency "ethdev" 00:07:32.187 Message: lib/pci: Defining dependency "pci" 00:07:32.187 Message: lib/cmdline: Defining dependency "cmdline" 00:07:32.187 Message: lib/hash: Defining dependency "hash" 00:07:32.187 Message: lib/timer: Defining dependency "timer" 00:07:32.187 Message: lib/compressdev: Defining dependency "compressdev" 00:07:32.187 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:32.187 Message: lib/dmadev: Defining dependency "dmadev" 00:07:32.187 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:32.187 Message: lib/power: Defining dependency "power" 00:07:32.187 Message: lib/reorder: Defining dependency "reorder" 00:07:32.187 Message: lib/security: Defining dependency "security" 00:07:32.187 Has header "linux/userfaultfd.h" : YES 00:07:32.187 Has header "linux/vduse.h" : YES 00:07:32.187 Message: lib/vhost: Defining dependency "vhost" 00:07:32.187 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:32.187 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:32.187 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:32.187 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:32.187 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:32.187 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:32.187 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:32.187 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:32.187 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:32.187 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:32.187 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:32.187 Configuring doxy-api-html.conf using configuration 00:07:32.187 Configuring doxy-api-man.conf using configuration 00:07:32.187 Program mandb found: YES (/usr/bin/mandb) 00:07:32.187 Program sphinx-build found: NO 00:07:32.187 Configuring rte_build_config.h using configuration 00:07:32.187 Message: 00:07:32.187 ================= 00:07:32.187 Applications 
Enabled 00:07:32.187 ================= 00:07:32.187 00:07:32.187 apps: 00:07:32.187 00:07:32.187 00:07:32.187 Message: 00:07:32.187 ================= 00:07:32.187 Libraries Enabled 00:07:32.187 ================= 00:07:32.187 00:07:32.187 libs: 00:07:32.187 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:32.187 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:32.187 cryptodev, dmadev, power, reorder, security, vhost, 00:07:32.187 00:07:32.187 Message: 00:07:32.187 =============== 00:07:32.187 Drivers Enabled 00:07:32.187 =============== 00:07:32.187 00:07:32.187 common: 00:07:32.187 00:07:32.187 bus: 00:07:32.187 pci, vdev, 00:07:32.187 mempool: 00:07:32.187 ring, 00:07:32.187 dma: 00:07:32.187 00:07:32.187 net: 00:07:32.187 00:07:32.187 crypto: 00:07:32.187 00:07:32.187 compress: 00:07:32.187 00:07:32.187 vdpa: 00:07:32.187 00:07:32.187 00:07:32.187 Message: 00:07:32.187 ================= 00:07:32.187 Content Skipped 00:07:32.187 ================= 00:07:32.187 00:07:32.187 apps: 00:07:32.187 dumpcap: explicitly disabled via build config 00:07:32.187 graph: explicitly disabled via build config 00:07:32.187 pdump: explicitly disabled via build config 00:07:32.187 proc-info: explicitly disabled via build config 00:07:32.187 test-acl: explicitly disabled via build config 00:07:32.187 test-bbdev: explicitly disabled via build config 00:07:32.187 test-cmdline: explicitly disabled via build config 00:07:32.187 test-compress-perf: explicitly disabled via build config 00:07:32.187 test-crypto-perf: explicitly disabled via build config 00:07:32.187 test-dma-perf: explicitly disabled via build config 00:07:32.187 test-eventdev: explicitly disabled via build config 00:07:32.187 test-fib: explicitly disabled via build config 00:07:32.187 test-flow-perf: explicitly disabled via build config 00:07:32.187 test-gpudev: explicitly disabled via build config 00:07:32.187 test-mldev: explicitly disabled via build config 00:07:32.187 test-pipeline: explicitly disabled via build config 00:07:32.187 test-pmd: explicitly disabled via build config 00:07:32.187 test-regex: explicitly disabled via build config 00:07:32.187 test-sad: explicitly disabled via build config 00:07:32.187 test-security-perf: explicitly disabled via build config 00:07:32.187 00:07:32.187 libs: 00:07:32.187 argparse: explicitly disabled via build config 00:07:32.187 metrics: explicitly disabled via build config 00:07:32.187 acl: explicitly disabled via build config 00:07:32.187 bbdev: explicitly disabled via build config 00:07:32.187 bitratestats: explicitly disabled via build config 00:07:32.187 bpf: explicitly disabled via build config 00:07:32.187 cfgfile: explicitly disabled via build config 00:07:32.187 distributor: explicitly disabled via build config 00:07:32.187 efd: explicitly disabled via build config 00:07:32.187 eventdev: explicitly disabled via build config 00:07:32.187 dispatcher: explicitly disabled via build config 00:07:32.187 gpudev: explicitly disabled via build config 00:07:32.187 gro: explicitly disabled via build config 00:07:32.187 gso: explicitly disabled via build config 00:07:32.187 ip_frag: explicitly disabled via build config 00:07:32.187 jobstats: explicitly disabled via build config 00:07:32.187 latencystats: explicitly disabled via build config 00:07:32.187 lpm: explicitly disabled via build config 00:07:32.187 member: explicitly disabled via build config 00:07:32.187 pcapng: explicitly disabled via build config 00:07:32.187 rawdev: explicitly disabled via build config 00:07:32.187 
regexdev: explicitly disabled via build config 00:07:32.187 mldev: explicitly disabled via build config 00:07:32.187 rib: explicitly disabled via build config 00:07:32.187 sched: explicitly disabled via build config 00:07:32.187 stack: explicitly disabled via build config 00:07:32.187 ipsec: explicitly disabled via build config 00:07:32.187 pdcp: explicitly disabled via build config 00:07:32.187 fib: explicitly disabled via build config 00:07:32.187 port: explicitly disabled via build config 00:07:32.187 pdump: explicitly disabled via build config 00:07:32.187 table: explicitly disabled via build config 00:07:32.187 pipeline: explicitly disabled via build config 00:07:32.187 graph: explicitly disabled via build config 00:07:32.187 node: explicitly disabled via build config 00:07:32.187 00:07:32.187 drivers: 00:07:32.187 common/cpt: not in enabled drivers build config 00:07:32.187 common/dpaax: not in enabled drivers build config 00:07:32.187 common/iavf: not in enabled drivers build config 00:07:32.187 common/idpf: not in enabled drivers build config 00:07:32.187 common/ionic: not in enabled drivers build config 00:07:32.187 common/mvep: not in enabled drivers build config 00:07:32.187 common/octeontx: not in enabled drivers build config 00:07:32.187 bus/auxiliary: not in enabled drivers build config 00:07:32.187 bus/cdx: not in enabled drivers build config 00:07:32.187 bus/dpaa: not in enabled drivers build config 00:07:32.187 bus/fslmc: not in enabled drivers build config 00:07:32.187 bus/ifpga: not in enabled drivers build config 00:07:32.187 bus/platform: not in enabled drivers build config 00:07:32.187 bus/uacce: not in enabled drivers build config 00:07:32.187 bus/vmbus: not in enabled drivers build config 00:07:32.187 common/cnxk: not in enabled drivers build config 00:07:32.187 common/mlx5: not in enabled drivers build config 00:07:32.187 common/nfp: not in enabled drivers build config 00:07:32.187 common/nitrox: not in enabled drivers build config 00:07:32.187 common/qat: not in enabled drivers build config 00:07:32.187 common/sfc_efx: not in enabled drivers build config 00:07:32.187 mempool/bucket: not in enabled drivers build config 00:07:32.187 mempool/cnxk: not in enabled drivers build config 00:07:32.187 mempool/dpaa: not in enabled drivers build config 00:07:32.187 mempool/dpaa2: not in enabled drivers build config 00:07:32.187 mempool/octeontx: not in enabled drivers build config 00:07:32.187 mempool/stack: not in enabled drivers build config 00:07:32.187 dma/cnxk: not in enabled drivers build config 00:07:32.187 dma/dpaa: not in enabled drivers build config 00:07:32.188 dma/dpaa2: not in enabled drivers build config 00:07:32.188 dma/hisilicon: not in enabled drivers build config 00:07:32.188 dma/idxd: not in enabled drivers build config 00:07:32.188 dma/ioat: not in enabled drivers build config 00:07:32.188 dma/skeleton: not in enabled drivers build config 00:07:32.188 net/af_packet: not in enabled drivers build config 00:07:32.188 net/af_xdp: not in enabled drivers build config 00:07:32.188 net/ark: not in enabled drivers build config 00:07:32.188 net/atlantic: not in enabled drivers build config 00:07:32.188 net/avp: not in enabled drivers build config 00:07:32.188 net/axgbe: not in enabled drivers build config 00:07:32.188 net/bnx2x: not in enabled drivers build config 00:07:32.188 net/bnxt: not in enabled drivers build config 00:07:32.188 net/bonding: not in enabled drivers build config 00:07:32.188 net/cnxk: not in enabled drivers build config 00:07:32.188 net/cpfl: 
not in enabled drivers build config 00:07:32.188 net/cxgbe: not in enabled drivers build config 00:07:32.188 net/dpaa: not in enabled drivers build config 00:07:32.188 net/dpaa2: not in enabled drivers build config 00:07:32.188 net/e1000: not in enabled drivers build config 00:07:32.188 net/ena: not in enabled drivers build config 00:07:32.188 net/enetc: not in enabled drivers build config 00:07:32.188 net/enetfec: not in enabled drivers build config 00:07:32.188 net/enic: not in enabled drivers build config 00:07:32.188 net/failsafe: not in enabled drivers build config 00:07:32.188 net/fm10k: not in enabled drivers build config 00:07:32.188 net/gve: not in enabled drivers build config 00:07:32.188 net/hinic: not in enabled drivers build config 00:07:32.188 net/hns3: not in enabled drivers build config 00:07:32.188 net/i40e: not in enabled drivers build config 00:07:32.188 net/iavf: not in enabled drivers build config 00:07:32.188 net/ice: not in enabled drivers build config 00:07:32.188 net/idpf: not in enabled drivers build config 00:07:32.188 net/igc: not in enabled drivers build config 00:07:32.188 net/ionic: not in enabled drivers build config 00:07:32.188 net/ipn3ke: not in enabled drivers build config 00:07:32.188 net/ixgbe: not in enabled drivers build config 00:07:32.188 net/mana: not in enabled drivers build config 00:07:32.188 net/memif: not in enabled drivers build config 00:07:32.188 net/mlx4: not in enabled drivers build config 00:07:32.188 net/mlx5: not in enabled drivers build config 00:07:32.188 net/mvneta: not in enabled drivers build config 00:07:32.188 net/mvpp2: not in enabled drivers build config 00:07:32.188 net/netvsc: not in enabled drivers build config 00:07:32.188 net/nfb: not in enabled drivers build config 00:07:32.188 net/nfp: not in enabled drivers build config 00:07:32.188 net/ngbe: not in enabled drivers build config 00:07:32.188 net/null: not in enabled drivers build config 00:07:32.188 net/octeontx: not in enabled drivers build config 00:07:32.188 net/octeon_ep: not in enabled drivers build config 00:07:32.188 net/pcap: not in enabled drivers build config 00:07:32.188 net/pfe: not in enabled drivers build config 00:07:32.188 net/qede: not in enabled drivers build config 00:07:32.188 net/ring: not in enabled drivers build config 00:07:32.188 net/sfc: not in enabled drivers build config 00:07:32.188 net/softnic: not in enabled drivers build config 00:07:32.188 net/tap: not in enabled drivers build config 00:07:32.188 net/thunderx: not in enabled drivers build config 00:07:32.188 net/txgbe: not in enabled drivers build config 00:07:32.188 net/vdev_netvsc: not in enabled drivers build config 00:07:32.188 net/vhost: not in enabled drivers build config 00:07:32.188 net/virtio: not in enabled drivers build config 00:07:32.188 net/vmxnet3: not in enabled drivers build config 00:07:32.188 raw/*: missing internal dependency, "rawdev" 00:07:32.188 crypto/armv8: not in enabled drivers build config 00:07:32.188 crypto/bcmfs: not in enabled drivers build config 00:07:32.188 crypto/caam_jr: not in enabled drivers build config 00:07:32.188 crypto/ccp: not in enabled drivers build config 00:07:32.188 crypto/cnxk: not in enabled drivers build config 00:07:32.188 crypto/dpaa_sec: not in enabled drivers build config 00:07:32.188 crypto/dpaa2_sec: not in enabled drivers build config 00:07:32.188 crypto/ipsec_mb: not in enabled drivers build config 00:07:32.188 crypto/mlx5: not in enabled drivers build config 00:07:32.188 crypto/mvsam: not in enabled drivers build config 
00:07:32.188 crypto/nitrox: not in enabled drivers build config
00:07:32.188 crypto/null: not in enabled drivers build config
00:07:32.188 crypto/octeontx: not in enabled drivers build config
00:07:32.188 crypto/openssl: not in enabled drivers build config
00:07:32.188 crypto/scheduler: not in enabled drivers build config
00:07:32.188 crypto/uadk: not in enabled drivers build config
00:07:32.188 crypto/virtio: not in enabled drivers build config
00:07:32.188 compress/isal: not in enabled drivers build config
00:07:32.188 compress/mlx5: not in enabled drivers build config
00:07:32.188 compress/nitrox: not in enabled drivers build config
00:07:32.188 compress/octeontx: not in enabled drivers build config
00:07:32.188 compress/zlib: not in enabled drivers build config
00:07:32.188 regex/*: missing internal dependency, "regexdev"
00:07:32.188 ml/*: missing internal dependency, "mldev"
00:07:32.188 vdpa/ifc: not in enabled drivers build config
00:07:32.188 vdpa/mlx5: not in enabled drivers build config
00:07:32.188 vdpa/nfp: not in enabled drivers build config
00:07:32.188 vdpa/sfc: not in enabled drivers build config
00:07:32.188 event/*: missing internal dependency, "eventdev"
00:07:32.188 baseband/*: missing internal dependency, "bbdev"
00:07:32.188 gpu/*: missing internal dependency, "gpudev"
00:07:32.188
00:07:32.188
00:07:32.754 Build targets in project: 85
00:07:32.754
00:07:32.754 DPDK 24.03.0
00:07:32.754
00:07:32.754 User defined options
00:07:32.754 buildtype : debug
00:07:32.754 default_library : shared
00:07:32.754 libdir : lib
00:07:32.754 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:07:32.754 b_sanitize : address
00:07:32.754 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:07:32.754 c_link_args :
00:07:32.754 cpu_instruction_set: native
00:07:32.754 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:07:32.754 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:07:32.754 enable_docs : false
00:07:32.754 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:07:32.754 enable_kmods : false
00:07:32.754 max_lcores : 128
00:07:32.754 tests : false
00:07:32.754
00:07:32.754 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:07:33.012 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:07:33.270 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:07:33.270 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:07:33.270 [3/268] Linking static target lib/librte_log.a
00:07:33.270 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:07:33.270 [5/268] Linking static target lib/librte_kvargs.a
00:07:33.270 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:07:33.837 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:07:33.837 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:07:33.837 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
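The "User defined options" summary above is the configuration of the bundled DPDK subproject that SPDK's build scripts generate before handing off to ninja. As a sketch only, an equivalent standalone meson invocation would look roughly like the following; the option names and values are copied from the summary, while the build directory (matching the one ninja enters above) and the exact quoting are assumptions, since the real command line is assembled by SPDK's configure step rather than typed by hand.

  # Sketch: standalone configuration of the bundled DPDK with the same options.
  meson setup dpdk/build-tmp \
    --buildtype=debug --default-library=shared --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
  # Then build, as the autodetected backend command later in this log does:
  ninja -C dpdk/build-tmp -j 10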
00:07:33.837 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:33.837 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:33.837 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:33.837 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:33.837 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:33.837 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:33.837 [16/268] Linking static target lib/librte_telemetry.a 00:07:33.837 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:34.097 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:34.356 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.356 [20/268] Linking target lib/librte_log.so.24.1 00:07:34.356 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:34.615 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:34.615 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:34.615 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:34.615 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:34.615 [26/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:34.615 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.615 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:34.873 [29/268] Linking target lib/librte_kvargs.so.24.1 00:07:34.873 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:34.873 [31/268] Linking target lib/librte_telemetry.so.24.1 00:07:34.873 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:35.132 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:35.132 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:35.132 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:35.132 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:35.391 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:35.391 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:35.391 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:35.391 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:35.650 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:35.650 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:35.650 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:35.650 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:35.650 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:35.650 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:35.908 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:35.908 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:36.173 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:36.173 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:36.173 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:36.434 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:36.434 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:36.434 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:36.692 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:36.692 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:36.692 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:36.692 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:36.951 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:36.951 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:36.951 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:36.951 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:37.209 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:37.209 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:37.209 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:37.209 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:37.467 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:37.467 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:37.467 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:37.725 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:37.725 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:37.725 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:37.725 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:37.725 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:37.982 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:37.982 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:37.982 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:37.982 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:38.245 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:38.245 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:38.245 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:38.245 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:38.507 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:38.507 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:38.507 [85/268] Linking static target lib/librte_ring.a 00:07:38.507 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:38.765 [87/268] Linking static target lib/librte_eal.a 00:07:38.765 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:38.765 [89/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:38.765 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:38.765 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:38.765 [92/268] Linking static target lib/librte_rcu.a 00:07:38.765 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:38.765 [94/268] Linking static target lib/librte_mempool.a 00:07:39.023 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:39.023 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:39.023 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:39.281 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:39.281 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:39.281 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:39.539 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:39.539 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:39.539 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:39.539 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:39.796 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:39.796 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:39.796 [107/268] Linking static target lib/librte_mbuf.a 00:07:40.055 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:40.055 [109/268] Linking static target lib/librte_meter.a 00:07:40.055 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:40.313 [111/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:40.313 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:40.313 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:40.313 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:40.313 [115/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:40.313 [116/268] Linking static target lib/librte_net.a 00:07:40.313 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:40.571 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:40.830 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:41.088 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:41.088 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:41.088 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:41.347 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:41.347 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:41.605 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:41.605 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:41.605 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:41.605 [128/268] Linking static target lib/librte_pci.a 00:07:41.605 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:41.605 [130/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:41.864 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:41.864 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:41.864 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:41.864 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:42.122 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:42.122 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:42.122 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:42.123 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:42.123 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:42.123 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:42.123 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:42.123 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:42.123 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:42.123 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:42.381 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:42.381 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:42.381 [147/268] Linking static target lib/librte_cmdline.a 00:07:42.639 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:42.898 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:42.898 [150/268] Linking static target lib/librte_timer.a 00:07:42.898 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:42.898 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:42.898 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:43.176 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:43.176 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:43.463 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:43.463 [157/268] Linking static target lib/librte_ethdev.a 00:07:43.463 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:43.721 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:43.721 [160/268] Linking static target lib/librte_compressdev.a 00:07:43.721 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:43.721 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:43.980 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:43.980 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:43.980 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:43.980 [166/268] Linking static target lib/librte_hash.a 00:07:44.239 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:44.239 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:44.239 [169/268] Linking static target lib/librte_dmadev.a 00:07:44.239 [170/268] Generating lib/cmdline.sym_chk with a 
custom command (wrapped by meson to capture output) 00:07:44.498 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:44.498 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:44.498 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:44.757 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:44.757 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.757 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:45.015 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:45.015 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:45.015 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:45.015 [180/268] Linking static target lib/librte_cryptodev.a 00:07:45.015 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:45.015 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:45.274 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.274 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.532 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:45.532 [186/268] Linking static target lib/librte_power.a 00:07:45.532 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:45.532 [188/268] Linking static target lib/librte_reorder.a 00:07:45.532 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:45.791 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:45.791 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:45.791 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:45.791 [193/268] Linking static target lib/librte_security.a 00:07:46.049 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:46.308 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:46.874 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:46.874 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:46.874 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:46.874 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:46.874 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:47.132 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:47.132 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:47.390 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:47.390 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:47.390 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:47.648 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:47.648 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:47.648 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:47.648 [209/268] 
Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:47.648 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:47.904 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:48.162 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:48.162 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:48.162 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:48.162 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:48.162 [216/268] Linking static target drivers/librte_bus_vdev.a 00:07:48.162 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:48.162 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:48.162 [219/268] Linking static target drivers/librte_bus_pci.a 00:07:48.162 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:48.162 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:48.420 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:48.420 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:48.420 [224/268] Linking static target drivers/librte_mempool_ring.a 00:07:48.420 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:48.420 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:48.986 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:49.553 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:51.454 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:51.454 [230/268] Linking target lib/librte_eal.so.24.1 00:07:51.454 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:51.454 [232/268] Linking target lib/librte_meter.so.24.1 00:07:51.454 [233/268] Linking target lib/librte_pci.so.24.1 00:07:51.454 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:51.454 [235/268] Linking target lib/librte_ring.so.24.1 00:07:51.454 [236/268] Linking target lib/librte_timer.so.24.1 00:07:51.454 [237/268] Linking target lib/librte_dmadev.so.24.1 00:07:51.454 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:51.454 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:51.711 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:51.712 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:51.712 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:51.712 [243/268] Linking target lib/librte_rcu.so.24.1 00:07:51.712 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:51.712 [245/268] Linking target lib/librte_mempool.so.24.1 00:07:51.969 [246/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:51.969 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:51.969 [248/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:51.969 [249/268] Linking target lib/librte_mbuf.so.24.1 00:07:51.969 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:52.227 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:52.227 [252/268] Linking target lib/librte_compressdev.so.24.1 00:07:52.227 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:07:52.227 [254/268] Linking target lib/librte_reorder.so.24.1 00:07:52.227 [255/268] Linking target lib/librte_net.so.24.1 00:07:52.485 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:52.485 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:52.485 [258/268] Linking target lib/librte_cmdline.so.24.1 00:07:52.485 [259/268] Linking target lib/librte_security.so.24.1 00:07:52.485 [260/268] Linking target lib/librte_hash.so.24.1 00:07:52.485 [261/268] Linking target lib/librte_ethdev.so.24.1 00:07:52.743 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:52.743 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:52.743 [264/268] Linking target lib/librte_power.so.24.1 00:07:54.651 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:54.651 [266/268] Linking static target lib/librte_vhost.a 00:07:56.024 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.285 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:56.285 INFO: autodetecting backend as ninja 00:07:56.285 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:18.210 CC lib/ut/ut.o 00:08:18.210 CC lib/log/log.o 00:08:18.210 CC lib/log/log_flags.o 00:08:18.210 CC lib/log/log_deprecated.o 00:08:18.210 CC lib/ut_mock/mock.o 00:08:18.210 LIB libspdk_ut_mock.a 00:08:18.210 LIB libspdk_ut.a 00:08:18.210 LIB libspdk_log.a 00:08:18.210 SO libspdk_ut_mock.so.6.0 00:08:18.210 SO libspdk_ut.so.2.0 00:08:18.210 SO libspdk_log.so.7.1 00:08:18.210 SYMLINK libspdk_ut_mock.so 00:08:18.210 SYMLINK libspdk_ut.so 00:08:18.210 SYMLINK libspdk_log.so 00:08:18.210 CC lib/util/base64.o 00:08:18.210 CXX lib/trace_parser/trace.o 00:08:18.210 CC lib/util/cpuset.o 00:08:18.210 CC lib/util/bit_array.o 00:08:18.210 CC lib/util/crc32c.o 00:08:18.210 CC lib/util/crc16.o 00:08:18.210 CC lib/util/crc32.o 00:08:18.210 CC lib/dma/dma.o 00:08:18.210 CC lib/ioat/ioat.o 00:08:18.210 CC lib/util/crc32_ieee.o 00:08:18.210 CC lib/vfio_user/host/vfio_user_pci.o 00:08:18.210 CC lib/util/crc64.o 00:08:18.210 CC lib/util/dif.o 00:08:18.210 LIB libspdk_dma.a 00:08:18.210 CC lib/util/fd.o 00:08:18.210 SO libspdk_dma.so.5.0 00:08:18.210 CC lib/util/fd_group.o 00:08:18.210 CC lib/util/file.o 00:08:18.210 SYMLINK libspdk_dma.so 00:08:18.210 CC lib/vfio_user/host/vfio_user.o 00:08:18.210 CC lib/util/hexlify.o 00:08:18.210 CC lib/util/iov.o 00:08:18.210 LIB libspdk_ioat.a 00:08:18.210 CC lib/util/math.o 00:08:18.210 SO libspdk_ioat.so.7.0 00:08:18.210 CC lib/util/net.o 00:08:18.210 CC lib/util/pipe.o 00:08:18.210 SYMLINK libspdk_ioat.so 00:08:18.210 CC lib/util/strerror_tls.o 00:08:18.210 CC lib/util/string.o 00:08:18.210 LIB libspdk_vfio_user.a 00:08:18.210 CC lib/util/uuid.o 00:08:18.210 SO libspdk_vfio_user.so.5.0 00:08:18.210 CC lib/util/xor.o 00:08:18.210 CC lib/util/zipf.o 00:08:18.210 CC lib/util/md5.o 00:08:18.210 SYMLINK 
libspdk_vfio_user.so 00:08:18.210 LIB libspdk_util.a 00:08:18.210 SO libspdk_util.so.10.1 00:08:18.210 LIB libspdk_trace_parser.a 00:08:18.210 SO libspdk_trace_parser.so.6.0 00:08:18.210 SYMLINK libspdk_util.so 00:08:18.210 SYMLINK libspdk_trace_parser.so 00:08:18.210 CC lib/json/json_parse.o 00:08:18.210 CC lib/json/json_util.o 00:08:18.210 CC lib/json/json_write.o 00:08:18.210 CC lib/idxd/idxd.o 00:08:18.210 CC lib/idxd/idxd_kernel.o 00:08:18.210 CC lib/idxd/idxd_user.o 00:08:18.210 CC lib/vmd/vmd.o 00:08:18.210 CC lib/rdma_utils/rdma_utils.o 00:08:18.210 CC lib/conf/conf.o 00:08:18.210 CC lib/env_dpdk/env.o 00:08:18.210 CC lib/env_dpdk/memory.o 00:08:18.210 CC lib/vmd/led.o 00:08:18.210 CC lib/env_dpdk/pci.o 00:08:18.210 LIB libspdk_rdma_utils.a 00:08:18.210 LIB libspdk_json.a 00:08:18.210 LIB libspdk_conf.a 00:08:18.210 SO libspdk_rdma_utils.so.1.0 00:08:18.210 CC lib/env_dpdk/init.o 00:08:18.210 SO libspdk_conf.so.6.0 00:08:18.210 SO libspdk_json.so.6.0 00:08:18.210 SYMLINK libspdk_rdma_utils.so 00:08:18.210 CC lib/env_dpdk/threads.o 00:08:18.210 CC lib/env_dpdk/pci_ioat.o 00:08:18.210 SYMLINK libspdk_conf.so 00:08:18.210 CC lib/env_dpdk/pci_virtio.o 00:08:18.210 SYMLINK libspdk_json.so 00:08:18.210 CC lib/env_dpdk/pci_vmd.o 00:08:18.210 CC lib/env_dpdk/pci_idxd.o 00:08:18.210 CC lib/env_dpdk/pci_event.o 00:08:18.210 CC lib/env_dpdk/sigbus_handler.o 00:08:18.468 CC lib/env_dpdk/pci_dpdk.o 00:08:18.468 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:18.468 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:18.468 LIB libspdk_idxd.a 00:08:18.468 LIB libspdk_vmd.a 00:08:18.468 SO libspdk_idxd.so.12.1 00:08:18.468 SO libspdk_vmd.so.6.0 00:08:18.727 SYMLINK libspdk_idxd.so 00:08:18.727 SYMLINK libspdk_vmd.so 00:08:18.727 CC lib/rdma_provider/common.o 00:08:18.727 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:18.727 CC lib/jsonrpc/jsonrpc_server.o 00:08:18.727 CC lib/jsonrpc/jsonrpc_client.o 00:08:18.727 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:18.727 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:18.985 LIB libspdk_rdma_provider.a 00:08:18.985 SO libspdk_rdma_provider.so.7.0 00:08:18.985 LIB libspdk_jsonrpc.a 00:08:18.985 SYMLINK libspdk_rdma_provider.so 00:08:18.985 SO libspdk_jsonrpc.so.6.0 00:08:19.244 SYMLINK libspdk_jsonrpc.so 00:08:19.502 CC lib/rpc/rpc.o 00:08:19.502 LIB libspdk_env_dpdk.a 00:08:19.502 SO libspdk_env_dpdk.so.15.1 00:08:19.760 LIB libspdk_rpc.a 00:08:19.760 SO libspdk_rpc.so.6.0 00:08:19.760 SYMLINK libspdk_env_dpdk.so 00:08:19.760 SYMLINK libspdk_rpc.so 00:08:20.018 CC lib/trace/trace.o 00:08:20.018 CC lib/trace/trace_flags.o 00:08:20.018 CC lib/trace/trace_rpc.o 00:08:20.018 CC lib/keyring/keyring.o 00:08:20.018 CC lib/keyring/keyring_rpc.o 00:08:20.018 CC lib/notify/notify.o 00:08:20.018 CC lib/notify/notify_rpc.o 00:08:20.276 LIB libspdk_notify.a 00:08:20.276 SO libspdk_notify.so.6.0 00:08:20.276 SYMLINK libspdk_notify.so 00:08:20.276 LIB libspdk_trace.a 00:08:20.276 LIB libspdk_keyring.a 00:08:20.276 SO libspdk_trace.so.11.0 00:08:20.276 SO libspdk_keyring.so.2.0 00:08:20.534 SYMLINK libspdk_trace.so 00:08:20.534 SYMLINK libspdk_keyring.so 00:08:20.534 CC lib/thread/thread.o 00:08:20.792 CC lib/thread/iobuf.o 00:08:20.792 CC lib/sock/sock.o 00:08:20.792 CC lib/sock/sock_rpc.o 00:08:21.358 LIB libspdk_sock.a 00:08:21.358 SO libspdk_sock.so.10.0 00:08:21.358 SYMLINK libspdk_sock.so 00:08:21.926 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:21.926 CC lib/nvme/nvme_ctrlr.o 00:08:21.926 CC lib/nvme/nvme_fabric.o 00:08:21.926 CC lib/nvme/nvme_pcie_common.o 00:08:21.926 CC lib/nvme/nvme_ns_cmd.o 
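Each SO/SYMLINK pair earlier in the output installs a versioned shared object (for example libspdk_log.so.7.1) plus an unversioned development symlink. A quick way to sanity-check what those steps produced is sketched below; the build/lib output directory is SPDK's default in-tree layout and is an assumption here, not something printed in this log.

  # Sketch: inspect one library produced by the SO/SYMLINK steps above.
  cd /home/vagrant/spdk_repo/spdk/build/lib
  ls -l libspdk_log.so                         # dev symlink pointing at libspdk_log.so.7.1
  readelf -d libspdk_log.so.7.1 | grep SONAME  # versioned soname embedded at link time
  nm -D --defined-only libspdk_log.so | head   # exported dynamic symbols (spdk_log_* entry points)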
00:08:21.926 CC lib/nvme/nvme_qpair.o 00:08:21.926 CC lib/nvme/nvme.o 00:08:21.926 CC lib/nvme/nvme_pcie.o 00:08:21.926 CC lib/nvme/nvme_ns.o 00:08:22.491 LIB libspdk_thread.a 00:08:22.491 SO libspdk_thread.so.11.0 00:08:22.491 SYMLINK libspdk_thread.so 00:08:22.491 CC lib/nvme/nvme_quirks.o 00:08:22.491 CC lib/nvme/nvme_transport.o 00:08:22.491 CC lib/nvme/nvme_discovery.o 00:08:22.491 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:22.749 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:22.749 CC lib/accel/accel.o 00:08:23.007 CC lib/blob/blobstore.o 00:08:23.007 CC lib/init/json_config.o 00:08:23.007 CC lib/virtio/virtio.o 00:08:23.007 CC lib/virtio/virtio_vhost_user.o 00:08:23.265 CC lib/init/subsystem.o 00:08:23.265 CC lib/virtio/virtio_vfio_user.o 00:08:23.265 CC lib/blob/request.o 00:08:23.265 CC lib/init/subsystem_rpc.o 00:08:23.265 CC lib/blob/zeroes.o 00:08:23.523 CC lib/virtio/virtio_pci.o 00:08:23.523 CC lib/fsdev/fsdev.o 00:08:23.523 CC lib/nvme/nvme_tcp.o 00:08:23.523 CC lib/accel/accel_rpc.o 00:08:23.523 CC lib/init/rpc.o 00:08:23.523 CC lib/blob/blob_bs_dev.o 00:08:23.781 LIB libspdk_init.a 00:08:23.781 LIB libspdk_virtio.a 00:08:23.781 CC lib/accel/accel_sw.o 00:08:23.781 SO libspdk_init.so.6.0 00:08:23.781 SO libspdk_virtio.so.7.0 00:08:23.781 CC lib/fsdev/fsdev_io.o 00:08:23.781 CC lib/fsdev/fsdev_rpc.o 00:08:24.040 SYMLINK libspdk_init.so 00:08:24.040 SYMLINK libspdk_virtio.so 00:08:24.040 CC lib/nvme/nvme_opal.o 00:08:24.040 CC lib/event/app.o 00:08:24.040 CC lib/nvme/nvme_io_msg.o 00:08:24.040 CC lib/nvme/nvme_poll_group.o 00:08:24.040 LIB libspdk_accel.a 00:08:24.299 SO libspdk_accel.so.16.0 00:08:24.299 CC lib/nvme/nvme_zns.o 00:08:24.299 LIB libspdk_fsdev.a 00:08:24.299 SYMLINK libspdk_accel.so 00:08:24.299 SO libspdk_fsdev.so.2.0 00:08:24.578 SYMLINK libspdk_fsdev.so 00:08:24.578 CC lib/nvme/nvme_stubs.o 00:08:24.578 CC lib/bdev/bdev.o 00:08:24.578 CC lib/bdev/bdev_rpc.o 00:08:24.578 CC lib/event/reactor.o 00:08:24.578 CC lib/event/log_rpc.o 00:08:24.881 CC lib/event/app_rpc.o 00:08:24.881 CC lib/event/scheduler_static.o 00:08:24.881 CC lib/bdev/bdev_zone.o 00:08:24.881 CC lib/bdev/part.o 00:08:24.881 CC lib/bdev/scsi_nvme.o 00:08:24.881 CC lib/nvme/nvme_auth.o 00:08:24.881 CC lib/nvme/nvme_cuse.o 00:08:25.139 CC lib/nvme/nvme_rdma.o 00:08:25.139 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:25.139 LIB libspdk_event.a 00:08:25.139 SO libspdk_event.so.14.0 00:08:25.398 SYMLINK libspdk_event.so 00:08:25.965 LIB libspdk_fuse_dispatcher.a 00:08:25.965 SO libspdk_fuse_dispatcher.so.1.0 00:08:25.965 SYMLINK libspdk_fuse_dispatcher.so 00:08:26.532 LIB libspdk_nvme.a 00:08:26.791 SO libspdk_nvme.so.15.0 00:08:27.050 LIB libspdk_blob.a 00:08:27.050 SO libspdk_blob.so.11.0 00:08:27.050 SYMLINK libspdk_nvme.so 00:08:27.050 SYMLINK libspdk_blob.so 00:08:27.308 CC lib/blobfs/blobfs.o 00:08:27.308 CC lib/blobfs/tree.o 00:08:27.566 CC lib/lvol/lvol.o 00:08:27.823 LIB libspdk_bdev.a 00:08:27.823 SO libspdk_bdev.so.17.0 00:08:28.081 SYMLINK libspdk_bdev.so 00:08:28.340 CC lib/ublk/ublk.o 00:08:28.340 CC lib/ublk/ublk_rpc.o 00:08:28.340 CC lib/nbd/nbd_rpc.o 00:08:28.340 CC lib/scsi/dev.o 00:08:28.340 CC lib/ftl/ftl_core.o 00:08:28.340 CC lib/scsi/lun.o 00:08:28.340 CC lib/nbd/nbd.o 00:08:28.340 CC lib/nvmf/ctrlr.o 00:08:28.340 LIB libspdk_blobfs.a 00:08:28.340 CC lib/nvmf/ctrlr_discovery.o 00:08:28.340 SO libspdk_blobfs.so.10.0 00:08:28.597 CC lib/nvmf/ctrlr_bdev.o 00:08:28.597 CC lib/nvmf/subsystem.o 00:08:28.597 SYMLINK libspdk_blobfs.so 00:08:28.597 CC lib/nvmf/nvmf.o 00:08:28.597 CC 
lib/scsi/port.o 00:08:28.597 LIB libspdk_lvol.a 00:08:28.597 SO libspdk_lvol.so.10.0 00:08:28.855 LIB libspdk_nbd.a 00:08:28.855 SO libspdk_nbd.so.7.0 00:08:28.855 CC lib/ftl/ftl_init.o 00:08:28.855 SYMLINK libspdk_lvol.so 00:08:28.855 CC lib/scsi/scsi.o 00:08:28.855 CC lib/scsi/scsi_bdev.o 00:08:28.855 SYMLINK libspdk_nbd.so 00:08:28.855 CC lib/scsi/scsi_pr.o 00:08:29.113 CC lib/scsi/scsi_rpc.o 00:08:29.113 CC lib/ftl/ftl_layout.o 00:08:29.114 CC lib/nvmf/nvmf_rpc.o 00:08:29.114 CC lib/ftl/ftl_debug.o 00:08:29.371 LIB libspdk_ublk.a 00:08:29.371 SO libspdk_ublk.so.3.0 00:08:29.371 CC lib/ftl/ftl_io.o 00:08:29.371 SYMLINK libspdk_ublk.so 00:08:29.371 CC lib/ftl/ftl_sb.o 00:08:29.629 CC lib/ftl/ftl_l2p.o 00:08:29.629 CC lib/scsi/task.o 00:08:29.629 CC lib/nvmf/transport.o 00:08:29.629 CC lib/nvmf/tcp.o 00:08:29.629 CC lib/nvmf/stubs.o 00:08:29.629 CC lib/ftl/ftl_l2p_flat.o 00:08:29.629 CC lib/ftl/ftl_nv_cache.o 00:08:29.629 LIB libspdk_scsi.a 00:08:29.629 CC lib/nvmf/mdns_server.o 00:08:29.888 SO libspdk_scsi.so.9.0 00:08:29.888 SYMLINK libspdk_scsi.so 00:08:29.888 CC lib/nvmf/rdma.o 00:08:29.888 CC lib/nvmf/auth.o 00:08:30.146 CC lib/ftl/ftl_band.o 00:08:30.146 CC lib/ftl/ftl_band_ops.o 00:08:30.405 CC lib/iscsi/conn.o 00:08:30.405 CC lib/iscsi/init_grp.o 00:08:30.405 CC lib/vhost/vhost.o 00:08:30.664 CC lib/vhost/vhost_rpc.o 00:08:30.664 CC lib/vhost/vhost_scsi.o 00:08:30.664 CC lib/vhost/vhost_blk.o 00:08:30.922 CC lib/vhost/rte_vhost_user.o 00:08:30.922 CC lib/ftl/ftl_writer.o 00:08:30.922 CC lib/ftl/ftl_rq.o 00:08:31.180 CC lib/iscsi/iscsi.o 00:08:31.180 CC lib/ftl/ftl_reloc.o 00:08:31.180 CC lib/iscsi/param.o 00:08:31.180 CC lib/iscsi/portal_grp.o 00:08:31.180 CC lib/iscsi/tgt_node.o 00:08:31.437 CC lib/ftl/ftl_l2p_cache.o 00:08:31.437 CC lib/iscsi/iscsi_subsystem.o 00:08:31.695 CC lib/iscsi/iscsi_rpc.o 00:08:31.695 CC lib/ftl/ftl_p2l.o 00:08:31.695 CC lib/ftl/ftl_p2l_log.o 00:08:31.695 CC lib/ftl/mngt/ftl_mngt.o 00:08:31.695 CC lib/iscsi/task.o 00:08:31.953 LIB libspdk_vhost.a 00:08:31.953 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:31.953 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:31.953 SO libspdk_vhost.so.8.0 00:08:31.953 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:31.953 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:31.953 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:32.212 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:32.212 SYMLINK libspdk_vhost.so 00:08:32.212 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:32.212 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:32.212 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:32.212 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:32.212 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:32.212 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:32.212 CC lib/ftl/utils/ftl_conf.o 00:08:32.469 CC lib/ftl/utils/ftl_md.o 00:08:32.469 CC lib/ftl/utils/ftl_mempool.o 00:08:32.469 CC lib/ftl/utils/ftl_bitmap.o 00:08:32.470 CC lib/ftl/utils/ftl_property.o 00:08:32.470 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:32.470 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:32.470 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:32.470 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:32.727 LIB libspdk_nvmf.a 00:08:32.727 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:32.727 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:32.727 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:32.727 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:32.727 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:32.728 SO libspdk_nvmf.so.20.0 00:08:32.728 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:32.728 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:32.986 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:32.986 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:32.986 LIB libspdk_iscsi.a 00:08:32.986 CC lib/ftl/base/ftl_base_dev.o 00:08:32.986 CC lib/ftl/base/ftl_base_bdev.o 00:08:32.986 CC lib/ftl/ftl_trace.o 00:08:32.986 SO libspdk_iscsi.so.8.0 00:08:32.986 SYMLINK libspdk_nvmf.so 00:08:33.243 SYMLINK libspdk_iscsi.so 00:08:33.243 LIB libspdk_ftl.a 00:08:33.501 SO libspdk_ftl.so.9.0 00:08:33.758 SYMLINK libspdk_ftl.so 00:08:34.323 CC module/env_dpdk/env_dpdk_rpc.o 00:08:34.323 CC module/keyring/file/keyring.o 00:08:34.323 CC module/keyring/linux/keyring.o 00:08:34.323 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:34.323 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:34.323 CC module/scheduler/gscheduler/gscheduler.o 00:08:34.323 CC module/sock/posix/posix.o 00:08:34.323 CC module/fsdev/aio/fsdev_aio.o 00:08:34.323 CC module/blob/bdev/blob_bdev.o 00:08:34.323 CC module/accel/error/accel_error.o 00:08:34.323 LIB libspdk_env_dpdk_rpc.a 00:08:34.323 SO libspdk_env_dpdk_rpc.so.6.0 00:08:34.323 CC module/keyring/linux/keyring_rpc.o 00:08:34.323 CC module/keyring/file/keyring_rpc.o 00:08:34.323 LIB libspdk_scheduler_dpdk_governor.a 00:08:34.323 SYMLINK libspdk_env_dpdk_rpc.so 00:08:34.323 LIB libspdk_scheduler_gscheduler.a 00:08:34.323 CC module/accel/error/accel_error_rpc.o 00:08:34.323 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:34.581 SO libspdk_scheduler_gscheduler.so.4.0 00:08:34.581 LIB libspdk_scheduler_dynamic.a 00:08:34.581 SO libspdk_scheduler_dynamic.so.4.0 00:08:34.581 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:34.581 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:34.581 CC module/fsdev/aio/linux_aio_mgr.o 00:08:34.581 SYMLINK libspdk_scheduler_gscheduler.so 00:08:34.581 LIB libspdk_keyring_file.a 00:08:34.581 LIB libspdk_keyring_linux.a 00:08:34.581 SYMLINK libspdk_scheduler_dynamic.so 00:08:34.581 LIB libspdk_accel_error.a 00:08:34.581 SO libspdk_keyring_file.so.2.0 00:08:34.581 SO libspdk_keyring_linux.so.1.0 00:08:34.581 LIB libspdk_blob_bdev.a 00:08:34.581 SO libspdk_accel_error.so.2.0 00:08:34.581 SO libspdk_blob_bdev.so.11.0 00:08:34.581 SYMLINK libspdk_keyring_linux.so 00:08:34.581 SYMLINK libspdk_keyring_file.so 00:08:34.581 SYMLINK libspdk_blob_bdev.so 00:08:34.581 SYMLINK libspdk_accel_error.so 00:08:34.838 CC module/accel/ioat/accel_ioat.o 00:08:34.838 CC module/accel/ioat/accel_ioat_rpc.o 00:08:34.838 CC module/accel/dsa/accel_dsa.o 00:08:34.838 CC module/accel/dsa/accel_dsa_rpc.o 00:08:34.838 CC module/accel/iaa/accel_iaa.o 00:08:34.838 CC module/accel/iaa/accel_iaa_rpc.o 00:08:35.096 LIB libspdk_accel_ioat.a 00:08:35.096 CC module/bdev/error/vbdev_error.o 00:08:35.096 CC module/bdev/delay/vbdev_delay.o 00:08:35.096 CC module/blobfs/bdev/blobfs_bdev.o 00:08:35.096 SO libspdk_accel_ioat.so.6.0 00:08:35.096 CC module/bdev/gpt/gpt.o 00:08:35.096 SYMLINK libspdk_accel_ioat.so 00:08:35.096 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:35.096 LIB libspdk_fsdev_aio.a 00:08:35.096 LIB libspdk_accel_dsa.a 00:08:35.096 LIB libspdk_accel_iaa.a 00:08:35.096 SO libspdk_fsdev_aio.so.1.0 00:08:35.096 SO libspdk_accel_iaa.so.3.0 00:08:35.096 SO libspdk_accel_dsa.so.5.0 00:08:35.096 LIB libspdk_sock_posix.a 00:08:35.096 SO libspdk_sock_posix.so.6.0 00:08:35.355 SYMLINK libspdk_accel_dsa.so 00:08:35.355 SYMLINK libspdk_accel_iaa.so 00:08:35.355 SYMLINK libspdk_fsdev_aio.so 00:08:35.355 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:35.355 LIB libspdk_blobfs_bdev.a 00:08:35.355 CC module/bdev/error/vbdev_error_rpc.o 00:08:35.355 SYMLINK libspdk_sock_posix.so 00:08:35.355 CC 
module/bdev/lvol/vbdev_lvol.o 00:08:35.355 CC module/bdev/gpt/vbdev_gpt.o 00:08:35.355 SO libspdk_blobfs_bdev.so.6.0 00:08:35.355 SYMLINK libspdk_blobfs_bdev.so 00:08:35.355 CC module/bdev/malloc/bdev_malloc.o 00:08:35.355 CC module/bdev/null/bdev_null.o 00:08:35.355 CC module/bdev/nvme/bdev_nvme.o 00:08:35.355 LIB libspdk_bdev_delay.a 00:08:35.355 CC module/bdev/passthru/vbdev_passthru.o 00:08:35.355 LIB libspdk_bdev_error.a 00:08:35.355 SO libspdk_bdev_delay.so.6.0 00:08:35.613 SO libspdk_bdev_error.so.6.0 00:08:35.613 SYMLINK libspdk_bdev_delay.so 00:08:35.613 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:35.613 CC module/bdev/raid/bdev_raid.o 00:08:35.613 SYMLINK libspdk_bdev_error.so 00:08:35.613 CC module/bdev/split/vbdev_split.o 00:08:35.613 CC module/bdev/split/vbdev_split_rpc.o 00:08:35.613 LIB libspdk_bdev_gpt.a 00:08:35.613 SO libspdk_bdev_gpt.so.6.0 00:08:35.613 SYMLINK libspdk_bdev_gpt.so 00:08:35.613 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:35.613 CC module/bdev/null/bdev_null_rpc.o 00:08:35.871 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:35.871 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:35.871 LIB libspdk_bdev_split.a 00:08:35.871 SO libspdk_bdev_split.so.6.0 00:08:35.871 LIB libspdk_bdev_null.a 00:08:35.871 SYMLINK libspdk_bdev_split.so 00:08:35.871 SO libspdk_bdev_null.so.6.0 00:08:35.871 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:36.132 LIB libspdk_bdev_passthru.a 00:08:36.132 LIB libspdk_bdev_malloc.a 00:08:36.132 SYMLINK libspdk_bdev_null.so 00:08:36.132 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:36.132 CC module/bdev/xnvme/bdev_xnvme.o 00:08:36.132 SO libspdk_bdev_passthru.so.6.0 00:08:36.132 SO libspdk_bdev_malloc.so.6.0 00:08:36.132 CC module/bdev/aio/bdev_aio.o 00:08:36.132 LIB libspdk_bdev_lvol.a 00:08:36.132 SYMLINK libspdk_bdev_passthru.so 00:08:36.132 SYMLINK libspdk_bdev_malloc.so 00:08:36.132 CC module/bdev/raid/bdev_raid_rpc.o 00:08:36.132 CC module/bdev/raid/bdev_raid_sb.o 00:08:36.132 SO libspdk_bdev_lvol.so.6.0 00:08:36.132 CC module/bdev/raid/raid0.o 00:08:36.389 SYMLINK libspdk_bdev_lvol.so 00:08:36.389 CC module/bdev/raid/raid1.o 00:08:36.389 CC module/bdev/raid/concat.o 00:08:36.389 LIB libspdk_bdev_zone_block.a 00:08:36.389 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:08:36.389 CC module/bdev/aio/bdev_aio_rpc.o 00:08:36.389 SO libspdk_bdev_zone_block.so.6.0 00:08:36.389 CC module/bdev/nvme/nvme_rpc.o 00:08:36.647 SYMLINK libspdk_bdev_zone_block.so 00:08:36.647 CC module/bdev/nvme/bdev_mdns_client.o 00:08:36.647 CC module/bdev/nvme/vbdev_opal.o 00:08:36.647 LIB libspdk_bdev_xnvme.a 00:08:36.647 LIB libspdk_bdev_aio.a 00:08:36.647 SO libspdk_bdev_xnvme.so.3.0 00:08:36.647 SO libspdk_bdev_aio.so.6.0 00:08:36.647 CC module/bdev/ftl/bdev_ftl.o 00:08:36.647 SYMLINK libspdk_bdev_xnvme.so 00:08:36.647 SYMLINK libspdk_bdev_aio.so 00:08:36.647 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:36.647 CC module/bdev/iscsi/bdev_iscsi.o 00:08:36.647 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:36.647 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:36.647 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:36.647 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:36.905 LIB libspdk_bdev_raid.a 00:08:36.905 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:36.905 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:36.905 SO libspdk_bdev_raid.so.6.0 00:08:36.905 LIB libspdk_bdev_ftl.a 00:08:37.165 SYMLINK libspdk_bdev_raid.so 00:08:37.165 SO libspdk_bdev_ftl.so.6.0 00:08:37.165 SYMLINK libspdk_bdev_ftl.so 00:08:37.165 LIB libspdk_bdev_iscsi.a 00:08:37.165 SO 
libspdk_bdev_iscsi.so.6.0 00:08:37.165 SYMLINK libspdk_bdev_iscsi.so 00:08:37.423 LIB libspdk_bdev_virtio.a 00:08:37.423 SO libspdk_bdev_virtio.so.6.0 00:08:37.423 SYMLINK libspdk_bdev_virtio.so 00:08:38.795 LIB libspdk_bdev_nvme.a 00:08:38.795 SO libspdk_bdev_nvme.so.7.1 00:08:38.795 SYMLINK libspdk_bdev_nvme.so 00:08:39.361 CC module/event/subsystems/iobuf/iobuf.o 00:08:39.361 CC module/event/subsystems/vmd/vmd.o 00:08:39.361 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:39.361 CC module/event/subsystems/scheduler/scheduler.o 00:08:39.361 CC module/event/subsystems/fsdev/fsdev.o 00:08:39.361 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:39.361 CC module/event/subsystems/keyring/keyring.o 00:08:39.361 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:39.361 CC module/event/subsystems/sock/sock.o 00:08:39.361 LIB libspdk_event_sock.a 00:08:39.361 LIB libspdk_event_vhost_blk.a 00:08:39.361 LIB libspdk_event_scheduler.a 00:08:39.361 LIB libspdk_event_keyring.a 00:08:39.361 LIB libspdk_event_vmd.a 00:08:39.361 SO libspdk_event_sock.so.5.0 00:08:39.361 SO libspdk_event_vhost_blk.so.3.0 00:08:39.361 LIB libspdk_event_fsdev.a 00:08:39.361 LIB libspdk_event_iobuf.a 00:08:39.361 SO libspdk_event_scheduler.so.4.0 00:08:39.361 SO libspdk_event_keyring.so.1.0 00:08:39.619 SO libspdk_event_vmd.so.6.0 00:08:39.619 SO libspdk_event_fsdev.so.1.0 00:08:39.619 SO libspdk_event_iobuf.so.3.0 00:08:39.619 SYMLINK libspdk_event_vhost_blk.so 00:08:39.619 SYMLINK libspdk_event_sock.so 00:08:39.619 SYMLINK libspdk_event_scheduler.so 00:08:39.619 SYMLINK libspdk_event_keyring.so 00:08:39.619 SYMLINK libspdk_event_vmd.so 00:08:39.619 SYMLINK libspdk_event_fsdev.so 00:08:39.619 SYMLINK libspdk_event_iobuf.so 00:08:39.877 CC module/event/subsystems/accel/accel.o 00:08:40.136 LIB libspdk_event_accel.a 00:08:40.136 SO libspdk_event_accel.so.6.0 00:08:40.136 SYMLINK libspdk_event_accel.so 00:08:40.396 CC module/event/subsystems/bdev/bdev.o 00:08:40.654 LIB libspdk_event_bdev.a 00:08:40.654 SO libspdk_event_bdev.so.6.0 00:08:40.912 SYMLINK libspdk_event_bdev.so 00:08:41.170 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:41.170 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:41.170 CC module/event/subsystems/ublk/ublk.o 00:08:41.170 CC module/event/subsystems/scsi/scsi.o 00:08:41.170 CC module/event/subsystems/nbd/nbd.o 00:08:41.428 LIB libspdk_event_scsi.a 00:08:41.428 LIB libspdk_event_nbd.a 00:08:41.428 SO libspdk_event_scsi.so.6.0 00:08:41.428 LIB libspdk_event_ublk.a 00:08:41.428 SO libspdk_event_nbd.so.6.0 00:08:41.428 SO libspdk_event_ublk.so.3.0 00:08:41.428 SYMLINK libspdk_event_nbd.so 00:08:41.428 SYMLINK libspdk_event_scsi.so 00:08:41.428 LIB libspdk_event_nvmf.a 00:08:41.428 SYMLINK libspdk_event_ublk.so 00:08:41.428 SO libspdk_event_nvmf.so.6.0 00:08:41.687 SYMLINK libspdk_event_nvmf.so 00:08:41.687 CC module/event/subsystems/iscsi/iscsi.o 00:08:41.687 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:41.945 LIB libspdk_event_vhost_scsi.a 00:08:41.945 LIB libspdk_event_iscsi.a 00:08:41.945 SO libspdk_event_vhost_scsi.so.3.0 00:08:41.945 SO libspdk_event_iscsi.so.6.0 00:08:41.945 SYMLINK libspdk_event_vhost_scsi.so 00:08:41.945 SYMLINK libspdk_event_iscsi.so 00:08:42.203 SO libspdk.so.6.0 00:08:42.203 SYMLINK libspdk.so 00:08:42.462 CC app/trace_record/trace_record.o 00:08:42.462 CXX app/trace/trace.o 00:08:42.462 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:42.462 CC app/iscsi_tgt/iscsi_tgt.o 00:08:42.721 CC app/nvmf_tgt/nvmf_main.o 00:08:42.721 CC examples/util/zipf/zipf.o 
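With libspdk.so linked, the build turns to the target applications (app/spdk_tgt, app/iscsi_tgt, app/nvmf_tgt, the trace tools) and the examples. Once linked, SPDK applications land under build/bin and are driven at runtime over JSON-RPC; the lines below are an illustrative session with the NVMe-oF target, where the TCP transport choice and the default RPC socket are assumptions rather than anything exercised in this run.

  # Sketch: start the NVMe-oF target built here and drive it over JSON-RPC.
  ./build/bin/nvmf_tgt &
  ./scripts/rpc.py nvmf_create_transport -t TCP   # create a TCP transport in the running target
  ./scripts/rpc.py framework_get_subsystems       # list the SPDK subsystems that came up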
00:08:42.721 CC test/thread/poller_perf/poller_perf.o 00:08:42.721 CC examples/ioat/perf/perf.o 00:08:42.721 CC test/app/bdev_svc/bdev_svc.o 00:08:42.721 CC test/dma/test_dma/test_dma.o 00:08:42.980 LINK nvmf_tgt 00:08:42.980 LINK zipf 00:08:42.980 LINK poller_perf 00:08:42.980 LINK interrupt_tgt 00:08:42.980 LINK iscsi_tgt 00:08:42.980 LINK spdk_trace_record 00:08:42.980 LINK ioat_perf 00:08:42.980 LINK bdev_svc 00:08:43.238 CC examples/ioat/verify/verify.o 00:08:43.238 TEST_HEADER include/spdk/accel.h 00:08:43.238 TEST_HEADER include/spdk/accel_module.h 00:08:43.238 TEST_HEADER include/spdk/assert.h 00:08:43.238 LINK spdk_trace 00:08:43.238 TEST_HEADER include/spdk/barrier.h 00:08:43.238 TEST_HEADER include/spdk/base64.h 00:08:43.238 TEST_HEADER include/spdk/bdev.h 00:08:43.238 TEST_HEADER include/spdk/bdev_module.h 00:08:43.238 TEST_HEADER include/spdk/bdev_zone.h 00:08:43.238 TEST_HEADER include/spdk/bit_array.h 00:08:43.238 TEST_HEADER include/spdk/bit_pool.h 00:08:43.238 TEST_HEADER include/spdk/blob_bdev.h 00:08:43.238 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:43.238 TEST_HEADER include/spdk/blobfs.h 00:08:43.238 TEST_HEADER include/spdk/blob.h 00:08:43.238 TEST_HEADER include/spdk/conf.h 00:08:43.238 TEST_HEADER include/spdk/config.h 00:08:43.238 TEST_HEADER include/spdk/cpuset.h 00:08:43.238 TEST_HEADER include/spdk/crc16.h 00:08:43.238 TEST_HEADER include/spdk/crc32.h 00:08:43.238 TEST_HEADER include/spdk/crc64.h 00:08:43.496 TEST_HEADER include/spdk/dif.h 00:08:43.496 TEST_HEADER include/spdk/dma.h 00:08:43.496 TEST_HEADER include/spdk/endian.h 00:08:43.496 TEST_HEADER include/spdk/env_dpdk.h 00:08:43.496 TEST_HEADER include/spdk/env.h 00:08:43.496 TEST_HEADER include/spdk/event.h 00:08:43.496 CC test/app/histogram_perf/histogram_perf.o 00:08:43.496 TEST_HEADER include/spdk/fd_group.h 00:08:43.496 TEST_HEADER include/spdk/fd.h 00:08:43.496 TEST_HEADER include/spdk/file.h 00:08:43.496 TEST_HEADER include/spdk/fsdev.h 00:08:43.496 TEST_HEADER include/spdk/fsdev_module.h 00:08:43.496 TEST_HEADER include/spdk/ftl.h 00:08:43.496 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:43.496 TEST_HEADER include/spdk/gpt_spec.h 00:08:43.496 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:43.496 TEST_HEADER include/spdk/hexlify.h 00:08:43.496 TEST_HEADER include/spdk/histogram_data.h 00:08:43.496 TEST_HEADER include/spdk/idxd.h 00:08:43.496 TEST_HEADER include/spdk/idxd_spec.h 00:08:43.496 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:43.496 TEST_HEADER include/spdk/init.h 00:08:43.496 TEST_HEADER include/spdk/ioat.h 00:08:43.496 TEST_HEADER include/spdk/ioat_spec.h 00:08:43.496 TEST_HEADER include/spdk/iscsi_spec.h 00:08:43.496 TEST_HEADER include/spdk/json.h 00:08:43.496 TEST_HEADER include/spdk/jsonrpc.h 00:08:43.496 TEST_HEADER include/spdk/keyring.h 00:08:43.496 TEST_HEADER include/spdk/keyring_module.h 00:08:43.496 TEST_HEADER include/spdk/likely.h 00:08:43.496 TEST_HEADER include/spdk/log.h 00:08:43.496 TEST_HEADER include/spdk/lvol.h 00:08:43.496 TEST_HEADER include/spdk/md5.h 00:08:43.496 TEST_HEADER include/spdk/memory.h 00:08:43.496 TEST_HEADER include/spdk/mmio.h 00:08:43.496 TEST_HEADER include/spdk/nbd.h 00:08:43.496 TEST_HEADER include/spdk/net.h 00:08:43.496 TEST_HEADER include/spdk/notify.h 00:08:43.496 TEST_HEADER include/spdk/nvme.h 00:08:43.496 TEST_HEADER include/spdk/nvme_intel.h 00:08:43.496 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:43.496 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:43.496 TEST_HEADER include/spdk/nvme_spec.h 00:08:43.496 TEST_HEADER 
include/spdk/nvme_zns.h 00:08:43.496 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:43.496 CC app/spdk_tgt/spdk_tgt.o 00:08:43.496 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:43.496 TEST_HEADER include/spdk/nvmf.h 00:08:43.496 TEST_HEADER include/spdk/nvmf_spec.h 00:08:43.496 TEST_HEADER include/spdk/nvmf_transport.h 00:08:43.496 TEST_HEADER include/spdk/opal.h 00:08:43.496 TEST_HEADER include/spdk/opal_spec.h 00:08:43.496 TEST_HEADER include/spdk/pci_ids.h 00:08:43.496 TEST_HEADER include/spdk/pipe.h 00:08:43.496 TEST_HEADER include/spdk/queue.h 00:08:43.496 TEST_HEADER include/spdk/reduce.h 00:08:43.496 TEST_HEADER include/spdk/rpc.h 00:08:43.496 TEST_HEADER include/spdk/scheduler.h 00:08:43.496 TEST_HEADER include/spdk/scsi.h 00:08:43.496 TEST_HEADER include/spdk/scsi_spec.h 00:08:43.496 TEST_HEADER include/spdk/sock.h 00:08:43.496 TEST_HEADER include/spdk/stdinc.h 00:08:43.496 TEST_HEADER include/spdk/string.h 00:08:43.496 TEST_HEADER include/spdk/thread.h 00:08:43.496 TEST_HEADER include/spdk/trace.h 00:08:43.496 TEST_HEADER include/spdk/trace_parser.h 00:08:43.496 TEST_HEADER include/spdk/tree.h 00:08:43.496 LINK test_dma 00:08:43.496 TEST_HEADER include/spdk/ublk.h 00:08:43.496 TEST_HEADER include/spdk/util.h 00:08:43.496 TEST_HEADER include/spdk/uuid.h 00:08:43.496 TEST_HEADER include/spdk/version.h 00:08:43.496 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:43.496 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:43.496 TEST_HEADER include/spdk/vhost.h 00:08:43.496 TEST_HEADER include/spdk/vmd.h 00:08:43.496 TEST_HEADER include/spdk/xor.h 00:08:43.496 TEST_HEADER include/spdk/zipf.h 00:08:43.496 CXX test/cpp_headers/accel.o 00:08:43.496 CC examples/thread/thread/thread_ex.o 00:08:43.496 LINK histogram_perf 00:08:43.754 LINK verify 00:08:43.754 CC examples/sock/hello_world/hello_sock.o 00:08:43.754 CC test/app/jsoncat/jsoncat.o 00:08:43.754 LINK spdk_tgt 00:08:43.754 CXX test/cpp_headers/accel_module.o 00:08:44.012 CXX test/cpp_headers/assert.o 00:08:44.012 LINK jsoncat 00:08:44.012 LINK thread 00:08:44.012 LINK hello_sock 00:08:44.012 CC test/event/event_perf/event_perf.o 00:08:44.012 LINK nvme_fuzz 00:08:44.012 CXX test/cpp_headers/barrier.o 00:08:44.271 CC test/env/mem_callbacks/mem_callbacks.o 00:08:44.271 CC app/spdk_lspci/spdk_lspci.o 00:08:44.271 CC app/spdk_nvme_perf/perf.o 00:08:44.271 CC app/spdk_nvme_identify/identify.o 00:08:44.271 LINK event_perf 00:08:44.271 CC app/spdk_nvme_discover/discovery_aer.o 00:08:44.271 CXX test/cpp_headers/base64.o 00:08:44.529 LINK spdk_lspci 00:08:44.529 CC examples/vmd/lsvmd/lsvmd.o 00:08:44.529 CC examples/idxd/perf/perf.o 00:08:44.529 CC test/event/reactor/reactor.o 00:08:44.529 CXX test/cpp_headers/bdev.o 00:08:44.787 LINK spdk_nvme_discover 00:08:44.787 LINK lsvmd 00:08:44.787 LINK reactor 00:08:44.787 CC examples/vmd/led/led.o 00:08:45.045 LINK mem_callbacks 00:08:45.045 CXX test/cpp_headers/bdev_module.o 00:08:45.045 CC app/spdk_top/spdk_top.o 00:08:45.045 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:45.045 LINK led 00:08:45.045 LINK idxd_perf 00:08:45.045 CC test/event/reactor_perf/reactor_perf.o 00:08:45.303 CC test/env/vtophys/vtophys.o 00:08:45.303 CXX test/cpp_headers/bdev_zone.o 00:08:45.303 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:45.303 LINK reactor_perf 00:08:45.303 LINK spdk_nvme_perf 00:08:45.562 LINK vtophys 00:08:45.562 CXX test/cpp_headers/bit_array.o 00:08:45.562 LINK spdk_nvme_identify 00:08:45.562 CC examples/accel/perf/accel_perf.o 00:08:45.820 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:45.820 CC 
test/event/app_repeat/app_repeat.o 00:08:45.820 CXX test/cpp_headers/bit_pool.o 00:08:45.820 CXX test/cpp_headers/blob_bdev.o 00:08:45.820 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:45.820 LINK app_repeat 00:08:45.820 LINK iscsi_fuzz 00:08:45.820 LINK vhost_fuzz 00:08:46.078 CC examples/blob/hello_world/hello_blob.o 00:08:46.078 LINK env_dpdk_post_init 00:08:46.078 CXX test/cpp_headers/blobfs_bdev.o 00:08:46.078 LINK hello_fsdev 00:08:46.336 LINK hello_blob 00:08:46.336 CC examples/nvme/hello_world/hello_world.o 00:08:46.336 CC examples/blob/cli/blobcli.o 00:08:46.336 CC test/event/scheduler/scheduler.o 00:08:46.336 CXX test/cpp_headers/blobfs.o 00:08:46.336 CC test/env/memory/memory_ut.o 00:08:46.336 CC test/app/stub/stub.o 00:08:46.336 CXX test/cpp_headers/blob.o 00:08:46.336 LINK spdk_top 00:08:46.594 LINK accel_perf 00:08:46.594 CXX test/cpp_headers/conf.o 00:08:46.594 CXX test/cpp_headers/config.o 00:08:46.594 LINK stub 00:08:46.594 LINK scheduler 00:08:46.851 CC app/spdk_dd/spdk_dd.o 00:08:46.851 LINK hello_world 00:08:46.851 CC app/vhost/vhost.o 00:08:46.851 CXX test/cpp_headers/cpuset.o 00:08:46.851 CC test/rpc_client/rpc_client_test.o 00:08:46.851 CC app/fio/nvme/fio_plugin.o 00:08:47.110 CC test/env/pci/pci_ut.o 00:08:47.110 LINK vhost 00:08:47.110 CC examples/nvme/reconnect/reconnect.o 00:08:47.110 CXX test/cpp_headers/crc16.o 00:08:47.367 LINK rpc_client_test 00:08:47.367 LINK spdk_dd 00:08:47.367 LINK blobcli 00:08:47.367 CC examples/bdev/hello_world/hello_bdev.o 00:08:47.625 CXX test/cpp_headers/crc32.o 00:08:47.625 CC examples/bdev/bdevperf/bdevperf.o 00:08:47.882 CXX test/cpp_headers/crc64.o 00:08:47.882 LINK pci_ut 00:08:47.882 LINK reconnect 00:08:47.882 LINK hello_bdev 00:08:47.882 CC test/blobfs/mkfs/mkfs.o 00:08:47.882 CC test/accel/dif/dif.o 00:08:47.882 LINK spdk_nvme 00:08:48.140 CXX test/cpp_headers/dif.o 00:08:48.140 CC test/lvol/esnap/esnap.o 00:08:48.140 LINK memory_ut 00:08:48.397 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:48.397 CC app/fio/bdev/fio_plugin.o 00:08:48.397 CXX test/cpp_headers/dma.o 00:08:48.397 LINK mkfs 00:08:48.397 CC examples/nvme/arbitration/arbitration.o 00:08:48.655 CC test/nvme/aer/aer.o 00:08:48.655 CXX test/cpp_headers/endian.o 00:08:48.655 CC test/nvme/reset/reset.o 00:08:48.655 CXX test/cpp_headers/env_dpdk.o 00:08:48.912 CC test/nvme/sgl/sgl.o 00:08:48.912 CXX test/cpp_headers/env.o 00:08:48.912 LINK arbitration 00:08:49.170 LINK reset 00:08:49.170 CXX test/cpp_headers/event.o 00:08:49.170 CXX test/cpp_headers/fd_group.o 00:08:49.428 LINK aer 00:08:49.428 LINK dif 00:08:49.428 LINK nvme_manage 00:08:49.428 LINK spdk_bdev 00:08:49.428 LINK bdevperf 00:08:49.428 LINK sgl 00:08:49.428 CXX test/cpp_headers/fd.o 00:08:49.686 CC test/nvme/e2edp/nvme_dp.o 00:08:49.686 CC test/nvme/overhead/overhead.o 00:08:49.686 CC examples/nvme/hotplug/hotplug.o 00:08:49.686 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:49.686 CXX test/cpp_headers/file.o 00:08:49.686 CC test/nvme/err_injection/err_injection.o 00:08:49.686 CXX test/cpp_headers/fsdev.o 00:08:49.686 CC examples/nvme/abort/abort.o 00:08:49.686 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:49.944 LINK cmb_copy 00:08:49.944 CXX test/cpp_headers/fsdev_module.o 00:08:49.944 LINK hotplug 00:08:49.944 CC test/nvme/startup/startup.o 00:08:49.944 LINK nvme_dp 00:08:49.944 LINK err_injection 00:08:49.944 LINK overhead 00:08:49.944 LINK pmr_persistence 00:08:49.944 CXX test/cpp_headers/ftl.o 00:08:50.203 CXX test/cpp_headers/fuse_dispatcher.o 00:08:50.203 LINK startup 
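The TEST_HEADER/CXX test/cpp_headers stream running through this part of the build appears to be SPDK's header self-containedness check: every public header under include/spdk/ is compiled on its own as C++. A rough sketch of that idea; the generated file names and compiler flags here are illustrative, not the exact ones the build uses:

    # One tiny translation unit per public header; a failed compile means
    # the header does not pull in everything it needs on its own.
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        echo "#include <spdk/${name}.h>" > "/tmp/hdr_${name}.cpp"  # illustrative path
        g++ -Iinclude -c "/tmp/hdr_${name}.cpp" -o /dev/null || echo "FAIL: $hdr"
    done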
00:08:50.203 LINK abort 00:08:50.203 CC test/nvme/reserve/reserve.o 00:08:50.203 CC test/nvme/simple_copy/simple_copy.o 00:08:50.203 CC test/nvme/connect_stress/connect_stress.o 00:08:50.203 CC test/bdev/bdevio/bdevio.o 00:08:50.203 CC test/nvme/boot_partition/boot_partition.o 00:08:50.462 CXX test/cpp_headers/gpt_spec.o 00:08:50.462 CC test/nvme/compliance/nvme_compliance.o 00:08:50.462 CC test/nvme/fused_ordering/fused_ordering.o 00:08:50.462 LINK boot_partition 00:08:50.462 LINK connect_stress 00:08:50.462 LINK reserve 00:08:50.462 CXX test/cpp_headers/hexlify.o 00:08:50.462 LINK simple_copy 00:08:50.790 CC examples/nvmf/nvmf/nvmf.o 00:08:50.790 CXX test/cpp_headers/histogram_data.o 00:08:50.790 LINK fused_ordering 00:08:50.790 CXX test/cpp_headers/idxd.o 00:08:50.790 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:50.790 LINK bdevio 00:08:50.790 CC test/nvme/cuse/cuse.o 00:08:50.790 CC test/nvme/fdp/fdp.o 00:08:50.790 LINK nvme_compliance 00:08:51.063 CXX test/cpp_headers/idxd_spec.o 00:08:51.063 CXX test/cpp_headers/init.o 00:08:51.063 CXX test/cpp_headers/ioat.o 00:08:51.063 CXX test/cpp_headers/ioat_spec.o 00:08:51.063 CXX test/cpp_headers/iscsi_spec.o 00:08:51.063 CXX test/cpp_headers/json.o 00:08:51.063 LINK nvmf 00:08:51.063 CXX test/cpp_headers/jsonrpc.o 00:08:51.063 LINK doorbell_aers 00:08:51.063 CXX test/cpp_headers/keyring.o 00:08:51.322 CXX test/cpp_headers/keyring_module.o 00:08:51.322 CXX test/cpp_headers/likely.o 00:08:51.322 LINK fdp 00:08:51.322 CXX test/cpp_headers/log.o 00:08:51.322 CXX test/cpp_headers/lvol.o 00:08:51.322 CXX test/cpp_headers/md5.o 00:08:51.322 CXX test/cpp_headers/memory.o 00:08:51.322 CXX test/cpp_headers/mmio.o 00:08:51.580 CXX test/cpp_headers/nbd.o 00:08:51.580 CXX test/cpp_headers/net.o 00:08:51.580 CXX test/cpp_headers/notify.o 00:08:51.580 CXX test/cpp_headers/nvme.o 00:08:51.580 CXX test/cpp_headers/nvme_intel.o 00:08:51.580 CXX test/cpp_headers/nvme_ocssd.o 00:08:51.580 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:51.580 CXX test/cpp_headers/nvme_spec.o 00:08:51.580 CXX test/cpp_headers/nvme_zns.o 00:08:51.580 CXX test/cpp_headers/nvmf_cmd.o 00:08:51.580 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:51.838 CXX test/cpp_headers/nvmf.o 00:08:51.838 CXX test/cpp_headers/nvmf_spec.o 00:08:51.838 CXX test/cpp_headers/nvmf_transport.o 00:08:51.838 CXX test/cpp_headers/opal.o 00:08:51.838 CXX test/cpp_headers/opal_spec.o 00:08:51.838 CXX test/cpp_headers/pci_ids.o 00:08:51.838 CXX test/cpp_headers/pipe.o 00:08:51.838 CXX test/cpp_headers/queue.o 00:08:51.838 CXX test/cpp_headers/reduce.o 00:08:51.838 CXX test/cpp_headers/rpc.o 00:08:51.838 CXX test/cpp_headers/scheduler.o 00:08:52.096 CXX test/cpp_headers/scsi.o 00:08:52.096 CXX test/cpp_headers/scsi_spec.o 00:08:52.096 CXX test/cpp_headers/sock.o 00:08:52.096 CXX test/cpp_headers/stdinc.o 00:08:52.096 CXX test/cpp_headers/string.o 00:08:52.096 CXX test/cpp_headers/thread.o 00:08:52.096 CXX test/cpp_headers/trace.o 00:08:52.096 CXX test/cpp_headers/trace_parser.o 00:08:52.096 CXX test/cpp_headers/tree.o 00:08:52.356 CXX test/cpp_headers/ublk.o 00:08:52.356 CXX test/cpp_headers/util.o 00:08:52.356 CXX test/cpp_headers/uuid.o 00:08:52.356 CXX test/cpp_headers/version.o 00:08:52.356 CXX test/cpp_headers/vfio_user_pci.o 00:08:52.356 CXX test/cpp_headers/vfio_user_spec.o 00:08:52.356 CXX test/cpp_headers/vhost.o 00:08:52.356 CXX test/cpp_headers/vmd.o 00:08:52.356 CXX test/cpp_headers/xor.o 00:08:52.356 CXX test/cpp_headers/zipf.o 00:08:52.356 LINK cuse 00:08:56.551 LINK esnap 00:08:56.551 
************************************ 00:08:56.551 END TEST make 00:08:56.551 ************************************ 00:08:56.551 00:08:56.551 real 1m37.299s 00:08:56.551 user 8m52.222s 00:08:56.551 sys 2m1.141s 00:08:56.551 15:20:42 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:56.551 15:20:42 make -- common/autotest_common.sh@10 -- $ set +x 00:08:56.551 15:20:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:56.551 15:20:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:56.551 15:20:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:56.551 15:20:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.551 15:20:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:56.551 15:20:42 -- pm/common@44 -- $ pid=5349 00:08:56.551 15:20:42 -- pm/common@50 -- $ kill -TERM 5349 00:08:56.551 15:20:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.551 15:20:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:56.551 15:20:42 -- pm/common@44 -- $ pid=5351 00:08:56.551 15:20:42 -- pm/common@50 -- $ kill -TERM 5351 00:08:56.551 15:20:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:56.551 15:20:42 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:56.551 15:20:42 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:56.551 15:20:42 -- common/autotest_common.sh@1693 -- # lcov --version 00:08:56.551 15:20:42 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:56.821 15:20:42 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:56.821 15:20:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.821 15:20:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.821 15:20:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.821 15:20:42 -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.821 15:20:42 -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.821 15:20:42 -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.821 15:20:42 -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.821 15:20:42 -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.821 15:20:42 -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.821 15:20:42 -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.821 15:20:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.821 15:20:42 -- scripts/common.sh@344 -- # case "$op" in 00:08:56.821 15:20:42 -- scripts/common.sh@345 -- # : 1 00:08:56.821 15:20:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.821 15:20:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.821 15:20:42 -- scripts/common.sh@365 -- # decimal 1 00:08:56.821 15:20:42 -- scripts/common.sh@353 -- # local d=1 00:08:56.821 15:20:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.821 15:20:42 -- scripts/common.sh@355 -- # echo 1 00:08:56.821 15:20:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.821 15:20:42 -- scripts/common.sh@366 -- # decimal 2 00:08:56.821 15:20:42 -- scripts/common.sh@353 -- # local d=2 00:08:56.821 15:20:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.821 15:20:42 -- scripts/common.sh@355 -- # echo 2 00:08:56.821 15:20:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.821 15:20:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.821 15:20:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.821 15:20:42 -- scripts/common.sh@368 -- # return 0 00:08:56.821 15:20:42 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.821 15:20:42 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:56.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.821 --rc genhtml_branch_coverage=1 00:08:56.821 --rc genhtml_function_coverage=1 00:08:56.821 --rc genhtml_legend=1 00:08:56.821 --rc geninfo_all_blocks=1 00:08:56.821 --rc geninfo_unexecuted_blocks=1 00:08:56.821 00:08:56.821 ' 00:08:56.821 15:20:42 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:56.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.821 --rc genhtml_branch_coverage=1 00:08:56.821 --rc genhtml_function_coverage=1 00:08:56.821 --rc genhtml_legend=1 00:08:56.821 --rc geninfo_all_blocks=1 00:08:56.821 --rc geninfo_unexecuted_blocks=1 00:08:56.821 00:08:56.821 ' 00:08:56.821 15:20:42 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:56.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.821 --rc genhtml_branch_coverage=1 00:08:56.821 --rc genhtml_function_coverage=1 00:08:56.821 --rc genhtml_legend=1 00:08:56.821 --rc geninfo_all_blocks=1 00:08:56.821 --rc geninfo_unexecuted_blocks=1 00:08:56.821 00:08:56.821 ' 00:08:56.821 15:20:42 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:56.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.821 --rc genhtml_branch_coverage=1 00:08:56.821 --rc genhtml_function_coverage=1 00:08:56.821 --rc genhtml_legend=1 00:08:56.821 --rc geninfo_all_blocks=1 00:08:56.821 --rc geninfo_unexecuted_blocks=1 00:08:56.821 00:08:56.821 ' 00:08:56.821 15:20:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:56.821 15:20:42 -- nvmf/common.sh@7 -- # uname -s 00:08:56.821 15:20:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.821 15:20:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.821 15:20:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.821 15:20:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.821 15:20:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.821 15:20:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.821 15:20:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.821 15:20:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.821 15:20:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.821 15:20:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.821 15:20:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52b74f82-d2e0-4d56-b70b-48f9d2a5993a 00:08:56.821 
15:20:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=52b74f82-d2e0-4d56-b70b-48f9d2a5993a 00:08:56.821 15:20:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.821 15:20:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.821 15:20:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:56.821 15:20:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.821 15:20:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:56.821 15:20:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.821 15:20:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.821 15:20:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.821 15:20:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.821 15:20:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.821 15:20:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.821 15:20:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.821 15:20:42 -- paths/export.sh@5 -- # export PATH 00:08:56.821 15:20:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.821 15:20:42 -- nvmf/common.sh@51 -- # : 0 00:08:56.821 15:20:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.821 15:20:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.821 15:20:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.821 15:20:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.821 15:20:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.821 15:20:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.821 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.821 15:20:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.821 15:20:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.821 15:20:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.821 15:20:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:56.821 15:20:42 -- spdk/autotest.sh@32 -- # uname -s 00:08:56.821 15:20:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:56.821 15:20:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:56.821 15:20:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:56.821 15:20:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:56.821 15:20:42 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:56.821 15:20:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:56.821 15:20:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:56.821 15:20:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:56.821 15:20:42 -- spdk/autotest.sh@48 -- # udevadm_pid=54909 00:08:56.821 15:20:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:56.821 15:20:42 -- pm/common@17 -- # local monitor 00:08:56.821 15:20:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.821 15:20:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:56.821 15:20:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.821 15:20:42 -- pm/common@25 -- # sleep 1 00:08:56.821 15:20:42 -- pm/common@21 -- # date +%s 00:08:56.821 15:20:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732116042 00:08:56.821 15:20:42 -- pm/common@21 -- # date +%s 00:08:56.821 15:20:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732116042 00:08:56.821 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732116042_collect-cpu-load.pm.log 00:08:56.821 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732116042_collect-vmstat.pm.log 00:08:57.756 15:20:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:57.756 15:20:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:57.756 15:20:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.756 15:20:43 -- common/autotest_common.sh@10 -- # set +x 00:08:57.756 15:20:43 -- spdk/autotest.sh@59 -- # create_test_list 00:08:57.756 15:20:43 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:57.756 15:20:43 -- common/autotest_common.sh@10 -- # set +x 00:08:57.756 15:20:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:57.756 15:20:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:57.756 15:20:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:57.756 15:20:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:57.756 15:20:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:57.756 15:20:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:57.756 15:20:43 -- common/autotest_common.sh@1457 -- # uname 00:08:57.756 15:20:43 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:57.756 15:20:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:57.756 15:20:43 -- common/autotest_common.sh@1477 -- # uname 00:08:57.756 15:20:43 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:57.756 15:20:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:57.756 15:20:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:58.016 lcov: LCOV version 1.15 00:08:58.016 15:20:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:09:16.171 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:16.171 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:31.058 15:21:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:31.058 15:21:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.058 15:21:16 -- common/autotest_common.sh@10 -- # set +x 00:09:31.058 15:21:16 -- spdk/autotest.sh@78 -- # rm -f 00:09:31.058 15:21:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:31.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.885 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:09:31.885 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:09:31.885 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:09:31.885 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:09:31.885 15:21:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:31.885 15:21:17 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:31.885 15:21:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:31.885 15:21:17 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:09:31.885 15:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.885 15:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:09:31.885 15:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:31.885 15:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:31.885 15:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.885 15:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.885 15:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:09:31.885 15:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:09:31.885 15:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:31.885 15:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.885 15:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.885 15:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:09:31.885 15:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:09:31.885 15:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:31.885 15:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.885 15:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.885 15:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:09:31.885 15:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:09:31.885 15:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:31.885 15:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.885 15:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.885 15:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:09:31.885 15:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:09:31.885 15:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:31.885 15:21:17 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.885 15:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.885 15:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:09:31.885 15:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:09:31.885 15:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:31.885 15:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.886 15:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.886 15:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:09:31.886 15:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:09:31.886 15:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:31.886 15:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.886 15:21:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:31.886 15:21:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:31.886 15:21:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:31.886 15:21:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:31.886 15:21:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:31.886 15:21:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:32.144 No valid GPT data, bailing 00:09:32.144 15:21:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:32.144 15:21:17 -- scripts/common.sh@394 -- # pt= 00:09:32.144 15:21:17 -- scripts/common.sh@395 -- # return 1 00:09:32.144 15:21:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:32.144 1+0 records in 00:09:32.144 1+0 records out 00:09:32.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128459 s, 81.6 MB/s 00:09:32.144 15:21:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:32.144 15:21:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:32.144 15:21:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:09:32.144 15:21:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:09:32.144 15:21:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:09:32.144 No valid GPT data, bailing 00:09:32.144 15:21:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:09:32.144 15:21:17 -- scripts/common.sh@394 -- # pt= 00:09:32.144 15:21:17 -- scripts/common.sh@395 -- # return 1 00:09:32.144 15:21:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:09:32.144 1+0 records in 00:09:32.144 1+0 records out 00:09:32.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00401211 s, 261 MB/s 00:09:32.144 15:21:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:32.144 15:21:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:32.144 15:21:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:09:32.144 15:21:17 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:09:32.144 15:21:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:09:32.144 No valid GPT data, bailing 00:09:32.144 15:21:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:09:32.144 15:21:18 -- scripts/common.sh@394 -- # pt= 00:09:32.144 15:21:18 -- scripts/common.sh@395 -- # return 1 00:09:32.144 15:21:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:09:32.144 1+0 
records in 00:09:32.144 1+0 records out 00:09:32.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534482 s, 196 MB/s 00:09:32.144 15:21:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:32.144 15:21:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:32.144 15:21:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:09:32.144 15:21:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:09:32.144 15:21:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:09:32.403 No valid GPT data, bailing 00:09:32.403 15:21:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:09:32.403 15:21:18 -- scripts/common.sh@394 -- # pt= 00:09:32.403 15:21:18 -- scripts/common.sh@395 -- # return 1 00:09:32.403 15:21:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:09:32.403 1+0 records in 00:09:32.403 1+0 records out 00:09:32.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00562978 s, 186 MB/s 00:09:32.403 15:21:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:32.403 15:21:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:32.403 15:21:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:09:32.403 15:21:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:09:32.403 15:21:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:09:32.403 No valid GPT data, bailing 00:09:32.403 15:21:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:09:32.403 15:21:18 -- scripts/common.sh@394 -- # pt= 00:09:32.403 15:21:18 -- scripts/common.sh@395 -- # return 1 00:09:32.403 15:21:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:09:32.403 1+0 records in 00:09:32.403 1+0 records out 00:09:32.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0055836 s, 188 MB/s 00:09:32.403 15:21:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:32.403 15:21:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:32.403 15:21:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:09:32.403 15:21:18 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:09:32.403 15:21:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:09:32.403 No valid GPT data, bailing 00:09:32.403 15:21:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:09:32.403 15:21:18 -- scripts/common.sh@394 -- # pt= 00:09:32.403 15:21:18 -- scripts/common.sh@395 -- # return 1 00:09:32.403 15:21:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:09:32.403 1+0 records in 00:09:32.403 1+0 records out 00:09:32.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385372 s, 272 MB/s 00:09:32.403 15:21:18 -- spdk/autotest.sh@105 -- # sync 00:09:32.662 15:21:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:32.662 15:21:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:32.662 15:21:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:34.640 15:21:20 -- spdk/autotest.sh@111 -- # uname -s 00:09:34.640 15:21:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:34.640 15:21:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:34.640 15:21:20 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:35.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.776 
Hugepages 00:09:35.776 node hugesize free / total 00:09:35.776 node0 1048576kB 0 / 0 00:09:35.776 node0 2048kB 0 / 0 00:09:35.776 00:09:35.776 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:35.776 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:36.034 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:36.034 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:09:36.293 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:09:36.293 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:09:36.293 15:21:22 -- spdk/autotest.sh@117 -- # uname -s 00:09:36.293 15:21:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:36.293 15:21:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:36.293 15:21:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:36.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.428 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.428 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.688 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.688 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.688 15:21:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:09:38.625 15:21:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:09:38.625 15:21:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:09:38.884 15:21:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:38.884 15:21:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:38.884 15:21:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:38.884 15:21:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:38.884 15:21:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:38.884 15:21:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.884 15:21:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:38.884 15:21:24 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:38.884 15:21:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:38.884 15:21:24 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:39.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.401 Waiting for block devices as requested 00:09:39.401 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.660 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.660 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.919 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.309 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:45.309 15:21:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:45.309 15:21:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:45.309 15:21:30 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:45.309 15:21:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:09:45.309 15:21:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:09:45.309 15:21:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:45.309 15:21:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:45.309 15:21:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:45.309 15:21:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1543 -- # continue 00:09:45.309 15:21:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:45.309 15:21:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:45.309 15:21:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:45.309 15:21:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:45.309 15:21:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:45.309 15:21:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:45.309 15:21:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:45.309 15:21:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:45.309 15:21:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1543 -- # continue 00:09:45.309 15:21:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:45.309 15:21:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:45.309 15:21:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:45.309 15:21:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:45.309 15:21:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1543 -- # continue 00:09:45.309 15:21:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:45.309 15:21:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:09:45.309 15:21:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:09:45.309 15:21:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:09:45.309 15:21:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:09:45.309 15:21:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:09:45.309 15:21:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:45.309 15:21:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:45.309 15:21:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:45.309 15:21:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:45.309 15:21:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
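The probe traced above for 0000:00:10.0 and 0000:00:11.0 (and repeated around here for the remaining two controllers) boils down to three steps: resolve the controller node behind a PCI BDF via sysfs, confirm the controller advertises namespace management (OACS bit 3, hence oacs_ns_manage=8 out of 0x12a), and verify unvmcap is 0 before namespaces are touched. Condensed into a standalone sketch, with the BDF taken from this run:

    bdf=0000:00:10.0   # example BDF from this log
    for n in /sys/class/nvme/nvme*; do
        # each /sys/class/nvme entry is a symlink that resolves through the PCI BDF
        [[ $(readlink -f "$n") == *"$bdf"* ]] && ctrl=/dev/$(basename "$n")
    done
    oacs=$(nvme id-ctrl "$ctrl" | awk -F: '/^oacs/ {print $2}')
    unvmcap=$(nvme id-ctrl "$ctrl" | awk -F: '/^unvmcap/ {print $2}')
    # bit 3 of OACS = namespace management; unvmcap 0 = no unallocated capacity
    (( oacs & 0x8 )) && (( unvmcap == 0 )) && echo "$ctrl: NS mgmt supported, fully allocated"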
00:09:45.309 15:21:30 -- common/autotest_common.sh@1543 -- # continue 00:09:45.309 15:21:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:45.309 15:21:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:45.309 15:21:30 -- common/autotest_common.sh@10 -- # set +x 00:09:45.309 15:21:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:45.309 15:21:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.309 15:21:30 -- common/autotest_common.sh@10 -- # set +x 00:09:45.309 15:21:30 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:45.877 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.445 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.445 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.445 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.445 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.705 15:21:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:46.705 15:21:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.705 15:21:32 -- common/autotest_common.sh@10 -- # set +x 00:09:46.705 15:21:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:46.705 15:21:32 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:46.705 15:21:32 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:46.705 15:21:32 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:46.705 15:21:32 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:46.705 15:21:32 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:46.705 15:21:32 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:46.705 15:21:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:46.705 15:21:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:46.705 15:21:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:46.705 15:21:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:46.705 15:21:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:46.705 15:21:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:46.705 15:21:32 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:46.705 15:21:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:46.705 15:21:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:46.705 15:21:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:46.705 15:21:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:46.705 15:21:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:46.705 15:21:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:46.705 15:21:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:46.705 15:21:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:46.705 15:21:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:46.705 15:21:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:46.705 15:21:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:09:46.705 15:21:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:46.705 15:21:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
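opal_revert_cleanup then walks the same four BDFs (the fourth comparison follows just below) and checks each PCI device id against 0x0a54, an Intel P4500/P4600-family NVMe id and presumably the hardware that needs its OPAL state reverted between runs; the QEMU controllers here all report 0x0010, so every comparison falls through. The check reduces to:

    # Same four emulated controllers as in this run (assumption: QEMU NVMe, id 0x0010).
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        dev=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $dev == 0x0a54 ]] && echo "$bdf matches 0x0a54, OPAL revert needed"
    done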
00:09:46.705 15:21:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:46.705 15:21:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:09:46.705 15:21:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:46.705 15:21:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:46.705 15:21:32 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:09:46.705 15:21:32 -- common/autotest_common.sh@1572 -- # return 0 00:09:46.705 15:21:32 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:09:46.705 15:21:32 -- common/autotest_common.sh@1580 -- # return 0 00:09:46.705 15:21:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:46.705 15:21:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:46.705 15:21:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:46.705 15:21:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:46.705 15:21:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:46.705 15:21:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.705 15:21:32 -- common/autotest_common.sh@10 -- # set +x 00:09:46.705 15:21:32 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:46.705 15:21:32 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:46.705 15:21:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.705 15:21:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.705 15:21:32 -- common/autotest_common.sh@10 -- # set +x 00:09:46.705 ************************************ 00:09:46.705 START TEST env 00:09:46.705 ************************************ 00:09:46.705 15:21:32 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:46.965 * Looking for test storage... 00:09:46.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:46.965 15:21:32 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:46.965 15:21:32 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:46.965 15:21:32 env -- common/autotest_common.sh@1693 -- # lcov --version 00:09:46.965 15:21:32 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:46.965 15:21:32 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.965 15:21:32 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.965 15:21:32 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.965 15:21:32 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.965 15:21:32 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.965 15:21:32 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.965 15:21:32 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.965 15:21:32 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.965 15:21:32 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.965 15:21:32 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.965 15:21:32 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.965 15:21:32 env -- scripts/common.sh@344 -- # case "$op" in 00:09:46.965 15:21:32 env -- scripts/common.sh@345 -- # : 1 00:09:46.965 15:21:32 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.965 15:21:32 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.965 15:21:32 env -- scripts/common.sh@365 -- # decimal 1 00:09:46.965 15:21:32 env -- scripts/common.sh@353 -- # local d=1 00:09:46.965 15:21:32 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.965 15:21:32 env -- scripts/common.sh@355 -- # echo 1 00:09:46.965 15:21:32 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.965 15:21:32 env -- scripts/common.sh@366 -- # decimal 2 00:09:46.965 15:21:32 env -- scripts/common.sh@353 -- # local d=2 00:09:46.965 15:21:32 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.965 15:21:32 env -- scripts/common.sh@355 -- # echo 2 00:09:46.965 15:21:32 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.965 15:21:32 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.965 15:21:32 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.965 15:21:32 env -- scripts/common.sh@368 -- # return 0 00:09:46.966 15:21:32 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.966 15:21:32 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:46.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.966 --rc genhtml_branch_coverage=1 00:09:46.966 --rc genhtml_function_coverage=1 00:09:46.966 --rc genhtml_legend=1 00:09:46.966 --rc geninfo_all_blocks=1 00:09:46.966 --rc geninfo_unexecuted_blocks=1 00:09:46.966 00:09:46.966 ' 00:09:46.966 15:21:32 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:46.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.966 --rc genhtml_branch_coverage=1 00:09:46.966 --rc genhtml_function_coverage=1 00:09:46.966 --rc genhtml_legend=1 00:09:46.966 --rc geninfo_all_blocks=1 00:09:46.966 --rc geninfo_unexecuted_blocks=1 00:09:46.966 00:09:46.966 ' 00:09:46.966 15:21:32 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:46.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.966 --rc genhtml_branch_coverage=1 00:09:46.966 --rc genhtml_function_coverage=1 00:09:46.966 --rc genhtml_legend=1 00:09:46.966 --rc geninfo_all_blocks=1 00:09:46.966 --rc geninfo_unexecuted_blocks=1 00:09:46.966 00:09:46.966 ' 00:09:46.966 15:21:32 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:46.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.966 --rc genhtml_branch_coverage=1 00:09:46.966 --rc genhtml_function_coverage=1 00:09:46.966 --rc genhtml_legend=1 00:09:46.966 --rc geninfo_all_blocks=1 00:09:46.966 --rc geninfo_unexecuted_blocks=1 00:09:46.966 00:09:46.966 ' 00:09:46.966 15:21:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:46.966 15:21:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.966 15:21:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.966 15:21:32 env -- common/autotest_common.sh@10 -- # set +x 00:09:46.966 ************************************ 00:09:46.966 START TEST env_memory 00:09:46.966 ************************************ 00:09:46.966 15:21:32 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:46.966 00:09:46.966 00:09:46.966 CUnit - A unit testing framework for C - Version 2.1-3 00:09:46.966 http://cunit.sourceforge.net/ 00:09:46.966 00:09:46.966 00:09:46.966 Suite: memory 00:09:47.225 Test: alloc and free memory map ...[2024-11-20 15:21:32.930911] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:47.225 passed 00:09:47.225 Test: mem map translation ...[2024-11-20 15:21:33.002925] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:47.225 [2024-11-20 15:21:33.003002] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:47.225 [2024-11-20 15:21:33.003111] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:47.225 [2024-11-20 15:21:33.003142] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:47.225 passed 00:09:47.225 Test: mem map registration ...[2024-11-20 15:21:33.114481] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:47.225 [2024-11-20 15:21:33.114562] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:47.225 passed 00:09:47.547 Test: mem map adjacent registrations ...passed 00:09:47.547 00:09:47.547 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.547 suites 1 1 n/a 0 0 00:09:47.547 tests 4 4 4 0 0 00:09:47.547 asserts 152 152 152 0 n/a 00:09:47.547 00:09:47.547 Elapsed time = 0.336 seconds 00:09:47.547 00:09:47.547 real 0m0.384s 00:09:47.547 user 0m0.336s 00:09:47.547 sys 0m0.040s 00:09:47.547 15:21:33 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.547 ************************************ 00:09:47.547 END TEST env_memory 00:09:47.547 ************************************ 00:09:47.547 15:21:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:47.547 15:21:33 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:47.547 15:21:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.547 15:21:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.547 15:21:33 env -- common/autotest_common.sh@10 -- # set +x 00:09:47.547 ************************************ 00:09:47.547 START TEST env_vtophys 00:09:47.547 ************************************ 00:09:47.547 15:21:33 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:47.547 EAL: lib.eal log level changed from notice to debug 00:09:47.547 EAL: Detected lcore 0 as core 0 on socket 0 00:09:47.547 EAL: Detected lcore 1 as core 0 on socket 0 00:09:47.547 EAL: Detected lcore 2 as core 0 on socket 0 00:09:47.547 EAL: Detected lcore 3 as core 0 on socket 0 00:09:47.547 EAL: Detected lcore 4 as core 0 on socket 0 00:09:47.547 EAL: Detected lcore 5 as core 0 on socket 0 00:09:47.547 EAL: Detected lcore 6 as core 0 on socket 0 00:09:47.547 EAL: Detected lcore 7 as core 0 on socket 0 00:09:47.547 EAL: Detected lcore 8 as core 0 on socket 0 00:09:47.547 EAL: Detected lcore 9 as core 0 on socket 0 00:09:47.547 EAL: Maximum logical cores by configuration: 128 00:09:47.547 EAL: Detected CPU lcores: 10 00:09:47.547 EAL: Detected NUMA nodes: 1 00:09:47.547 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:47.547 EAL: Detected shared linkage of DPDK 00:09:47.547 EAL: No 
shared files mode enabled, IPC will be disabled 00:09:47.547 EAL: Selected IOVA mode 'PA' 00:09:47.547 EAL: Probing VFIO support... 00:09:47.547 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:47.547 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:47.547 EAL: Ask a virtual area of 0x2e000 bytes 00:09:47.547 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:47.547 EAL: Setting up physically contiguous memory... 00:09:47.547 EAL: Setting maximum number of open files to 524288 00:09:47.547 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:47.547 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:47.547 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.547 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:47.547 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:47.547 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.547 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:47.547 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:47.547 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.547 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:47.547 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:47.547 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.547 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:47.547 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:47.547 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.547 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:47.547 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:47.547 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.547 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:47.547 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:47.547 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.547 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:47.547 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:47.547 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.547 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:47.547 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:47.547 EAL: Hugepages will be freed exactly as allocated. 00:09:47.547 EAL: No shared files mode enabled, IPC is disabled 00:09:47.547 EAL: No shared files mode enabled, IPC is disabled 00:09:47.831 EAL: TSC frequency is ~2100000 KHz 00:09:47.831 EAL: Main lcore 0 is ready (tid=7f392a0f4a40;cpuset=[0]) 00:09:47.831 EAL: Trying to obtain current memory policy. 00:09:47.831 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.831 EAL: Restoring previous memory policy: 0 00:09:47.831 EAL: request: mp_malloc_sync 00:09:47.831 EAL: No shared files mode enabled, IPC is disabled 00:09:47.831 EAL: Heap on socket 0 was expanded by 2MB 00:09:47.831 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:47.831 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:47.831 EAL: Mem event callback 'spdk:(nil)' registered 00:09:47.831 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:09:47.831 00:09:47.831 00:09:47.831 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.831 http://cunit.sourceforge.net/ 00:09:47.831 00:09:47.831 00:09:47.831 Suite: components_suite 00:09:48.089 Test: vtophys_malloc_test ...passed 00:09:48.089 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:48.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:48.089 EAL: Restoring previous memory policy: 4 00:09:48.089 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.089 EAL: request: mp_malloc_sync 00:09:48.089 EAL: No shared files mode enabled, IPC is disabled 00:09:48.089 EAL: Heap on socket 0 was expanded by 4MB 00:09:48.348 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.348 EAL: request: mp_malloc_sync 00:09:48.348 EAL: No shared files mode enabled, IPC is disabled 00:09:48.348 EAL: Heap on socket 0 was shrunk by 4MB 00:09:48.348 EAL: Trying to obtain current memory policy. 00:09:48.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:48.348 EAL: Restoring previous memory policy: 4 00:09:48.348 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.348 EAL: request: mp_malloc_sync 00:09:48.348 EAL: No shared files mode enabled, IPC is disabled 00:09:48.348 EAL: Heap on socket 0 was expanded by 6MB 00:09:48.348 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.348 EAL: request: mp_malloc_sync 00:09:48.348 EAL: No shared files mode enabled, IPC is disabled 00:09:48.348 EAL: Heap on socket 0 was shrunk by 6MB 00:09:48.348 EAL: Trying to obtain current memory policy. 00:09:48.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:48.348 EAL: Restoring previous memory policy: 4 00:09:48.348 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.348 EAL: request: mp_malloc_sync 00:09:48.348 EAL: No shared files mode enabled, IPC is disabled 00:09:48.348 EAL: Heap on socket 0 was expanded by 10MB 00:09:48.348 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.348 EAL: request: mp_malloc_sync 00:09:48.348 EAL: No shared files mode enabled, IPC is disabled 00:09:48.348 EAL: Heap on socket 0 was shrunk by 10MB 00:09:48.348 EAL: Trying to obtain current memory policy. 00:09:48.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:48.348 EAL: Restoring previous memory policy: 4 00:09:48.348 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.348 EAL: request: mp_malloc_sync 00:09:48.348 EAL: No shared files mode enabled, IPC is disabled 00:09:48.348 EAL: Heap on socket 0 was expanded by 18MB 00:09:48.348 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.348 EAL: request: mp_malloc_sync 00:09:48.348 EAL: No shared files mode enabled, IPC is disabled 00:09:48.348 EAL: Heap on socket 0 was shrunk by 18MB 00:09:48.348 EAL: Trying to obtain current memory policy. 00:09:48.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:48.348 EAL: Restoring previous memory policy: 4 00:09:48.348 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.348 EAL: request: mp_malloc_sync 00:09:48.348 EAL: No shared files mode enabled, IPC is disabled 00:09:48.349 EAL: Heap on socket 0 was expanded by 34MB 00:09:48.349 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.349 EAL: request: mp_malloc_sync 00:09:48.349 EAL: No shared files mode enabled, IPC is disabled 00:09:48.349 EAL: Heap on socket 0 was shrunk by 34MB 00:09:48.349 EAL: Trying to obtain current memory policy. 
00:09:48.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:48.607 EAL: Restoring previous memory policy: 4 00:09:48.607 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.607 EAL: request: mp_malloc_sync 00:09:48.607 EAL: No shared files mode enabled, IPC is disabled 00:09:48.607 EAL: Heap on socket 0 was expanded by 66MB 00:09:48.607 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.607 EAL: request: mp_malloc_sync 00:09:48.607 EAL: No shared files mode enabled, IPC is disabled 00:09:48.607 EAL: Heap on socket 0 was shrunk by 66MB 00:09:48.607 EAL: Trying to obtain current memory policy. 00:09:48.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:48.867 EAL: Restoring previous memory policy: 4 00:09:48.867 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.867 EAL: request: mp_malloc_sync 00:09:48.867 EAL: No shared files mode enabled, IPC is disabled 00:09:48.867 EAL: Heap on socket 0 was expanded by 130MB 00:09:48.867 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.126 EAL: request: mp_malloc_sync 00:09:49.126 EAL: No shared files mode enabled, IPC is disabled 00:09:49.126 EAL: Heap on socket 0 was shrunk by 130MB 00:09:49.126 EAL: Trying to obtain current memory policy. 00:09:49.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:49.385 EAL: Restoring previous memory policy: 4 00:09:49.385 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.385 EAL: request: mp_malloc_sync 00:09:49.385 EAL: No shared files mode enabled, IPC is disabled 00:09:49.385 EAL: Heap on socket 0 was expanded by 258MB 00:09:49.644 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.644 EAL: request: mp_malloc_sync 00:09:49.644 EAL: No shared files mode enabled, IPC is disabled 00:09:49.644 EAL: Heap on socket 0 was shrunk by 258MB 00:09:50.211 EAL: Trying to obtain current memory policy. 00:09:50.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:50.211 EAL: Restoring previous memory policy: 4 00:09:50.211 EAL: Calling mem event callback 'spdk:(nil)' 00:09:50.211 EAL: request: mp_malloc_sync 00:09:50.211 EAL: No shared files mode enabled, IPC is disabled 00:09:50.211 EAL: Heap on socket 0 was expanded by 514MB 00:09:51.151 EAL: Calling mem event callback 'spdk:(nil)' 00:09:51.409 EAL: request: mp_malloc_sync 00:09:51.409 EAL: No shared files mode enabled, IPC is disabled 00:09:51.409 EAL: Heap on socket 0 was shrunk by 514MB 00:09:52.344 EAL: Trying to obtain current memory policy. 
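
Each "expanded by"/"shrunk by" pair above is DPDK's dynamic memory subsystem growing and then releasing the hugepage heap around a single allocation in the vtophys suite. A minimal sketch for reproducing this run by hand, assuming a checkout at /home/vagrant/spdk_repo/spdk as in this job; the HUGEMEM value is illustrative:

    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # reserve 2048 MB of 2 MiB hugepages
    sudo /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys        # the CUnit suite shown above
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset          # release the hugepages afterwards
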
00:09:52.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:52.344 EAL: Restoring previous memory policy: 4 00:09:52.344 EAL: Calling mem event callback 'spdk:(nil)' 00:09:52.344 EAL: request: mp_malloc_sync 00:09:52.344 EAL: No shared files mode enabled, IPC is disabled 00:09:52.344 EAL: Heap on socket 0 was expanded by 1026MB 00:09:54.250 EAL: Calling mem event callback 'spdk:(nil)' 00:09:54.509 EAL: request: mp_malloc_sync 00:09:54.509 EAL: No shared files mode enabled, IPC is disabled 00:09:54.509 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:56.438 passed 00:09:56.438 00:09:56.438 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.438 suites 1 1 n/a 0 0 00:09:56.438 tests 2 2 2 0 0 00:09:56.438 asserts 5684 5684 5684 0 n/a 00:09:56.438 00:09:56.438 Elapsed time = 8.361 seconds 00:09:56.438 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.438 EAL: request: mp_malloc_sync 00:09:56.438 EAL: No shared files mode enabled, IPC is disabled 00:09:56.438 EAL: Heap on socket 0 was shrunk by 2MB 00:09:56.438 EAL: No shared files mode enabled, IPC is disabled 00:09:56.438 EAL: No shared files mode enabled, IPC is disabled 00:09:56.438 EAL: No shared files mode enabled, IPC is disabled 00:09:56.438 00:09:56.438 real 0m8.728s 00:09:56.438 user 0m7.624s 00:09:56.438 sys 0m0.941s 00:09:56.438 ************************************ 00:09:56.438 END TEST env_vtophys 00:09:56.438 ************************************ 00:09:56.438 15:21:42 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.438 15:21:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:56.438 15:21:42 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:56.438 15:21:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.438 15:21:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.438 15:21:42 env -- common/autotest_common.sh@10 -- # set +x 00:09:56.438 ************************************ 00:09:56.438 START TEST env_pci 00:09:56.438 ************************************ 00:09:56.438 15:21:42 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:56.438 00:09:56.438 00:09:56.438 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.438 http://cunit.sourceforge.net/ 00:09:56.438 00:09:56.438 00:09:56.438 Suite: pci 00:09:56.438 Test: pci_hook ...[2024-11-20 15:21:42.126827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57793 has claimed it 00:09:56.438 EAL: Cannot find device (10000:00:01.0) 00:09:56.438 passed 00:09:56.438 00:09:56.438 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.438 suites 1 1 n/a 0 0 00:09:56.438 tests 1 1 1 0 0 00:09:56.438 asserts 25 25 25 0 n/a 00:09:56.438 00:09:56.438 Elapsed time = 0.009 seconds 00:09:56.438 EAL: Failed to attach device on primary process 00:09:56.438 00:09:56.438 real 0m0.105s 00:09:56.438 user 0m0.043s 00:09:56.438 sys 0m0.061s 00:09:56.438 15:21:42 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.438 ************************************ 00:09:56.438 END TEST env_pci 00:09:56.438 ************************************ 00:09:56.438 15:21:42 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:56.438 15:21:42 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:56.438 15:21:42 env -- env/env.sh@15 -- # uname 00:09:56.438 15:21:42 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:56.438 15:21:42 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:56.438 15:21:42 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:56.438 15:21:42 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:56.438 15:21:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.438 15:21:42 env -- common/autotest_common.sh@10 -- # set +x 00:09:56.438 ************************************ 00:09:56.438 START TEST env_dpdk_post_init 00:09:56.438 ************************************ 00:09:56.438 15:21:42 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:56.438 EAL: Detected CPU lcores: 10 00:09:56.438 EAL: Detected NUMA nodes: 1 00:09:56.438 EAL: Detected shared linkage of DPDK 00:09:56.438 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:56.438 EAL: Selected IOVA mode 'PA' 00:09:56.697 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:56.697 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:56.697 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:56.697 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:09:56.697 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:09:56.697 Starting DPDK initialization... 00:09:56.697 Starting SPDK post initialization... 00:09:56.697 SPDK NVMe probe 00:09:56.697 Attaching to 0000:00:10.0 00:09:56.697 Attaching to 0000:00:11.0 00:09:56.697 Attaching to 0000:00:12.0 00:09:56.697 Attaching to 0000:00:13.0 00:09:56.697 Attached to 0000:00:10.0 00:09:56.697 Attached to 0000:00:11.0 00:09:56.697 Attached to 0000:00:13.0 00:09:56.697 Attached to 0000:00:12.0 00:09:56.697 Cleaning up... 
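
The Attaching/Attached lines above are the spdk_nvme driver probing each emulated controller after EAL initialization. The binary can be run by hand with the same flags the harness passed; a sketch, with paths matching this job's layout (device BDFs will differ on other machines):

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 \
        --base-virtaddr=0x200000000000   # pin DPDK mappings at the base address the test expects
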
00:09:56.697 00:09:56.697 real 0m0.334s 00:09:56.697 user 0m0.111s 00:09:56.697 sys 0m0.127s 00:09:56.697 ************************************ 00:09:56.697 END TEST env_dpdk_post_init 00:09:56.697 ************************************ 00:09:56.698 15:21:42 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.698 15:21:42 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:56.698 15:21:42 env -- env/env.sh@26 -- # uname 00:09:56.698 15:21:42 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:56.698 15:21:42 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:56.698 15:21:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.698 15:21:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.698 15:21:42 env -- common/autotest_common.sh@10 -- # set +x 00:09:56.698 ************************************ 00:09:56.698 START TEST env_mem_callbacks 00:09:56.698 ************************************ 00:09:56.698 15:21:42 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:56.955 EAL: Detected CPU lcores: 10 00:09:56.955 EAL: Detected NUMA nodes: 1 00:09:56.955 EAL: Detected shared linkage of DPDK 00:09:56.955 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:56.955 EAL: Selected IOVA mode 'PA' 00:09:56.955 00:09:56.955 00:09:56.955 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.955 http://cunit.sourceforge.net/ 00:09:56.955 00:09:56.955 00:09:56.955 Suite: memory 00:09:56.955 Test: test ... 00:09:56.955 register 0x200000200000 2097152 00:09:56.955 malloc 3145728 00:09:56.955 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:56.955 register 0x200000400000 4194304 00:09:56.955 buf 0x2000004fffc0 len 3145728 PASSED 00:09:56.955 malloc 64 00:09:56.955 buf 0x2000004ffec0 len 64 PASSED 00:09:56.955 malloc 4194304 00:09:56.955 register 0x200000800000 6291456 00:09:56.955 buf 0x2000009fffc0 len 4194304 PASSED 00:09:56.955 free 0x2000004fffc0 3145728 00:09:56.955 free 0x2000004ffec0 64 00:09:56.956 unregister 0x200000400000 4194304 PASSED 00:09:56.956 free 0x2000009fffc0 4194304 00:09:56.956 unregister 0x200000800000 6291456 PASSED 00:09:56.956 malloc 8388608 00:09:57.213 register 0x200000400000 10485760 00:09:57.213 buf 0x2000005fffc0 len 8388608 PASSED 00:09:57.213 free 0x2000005fffc0 8388608 00:09:57.213 unregister 0x200000400000 10485760 PASSED 00:09:57.213 passed 00:09:57.213 00:09:57.213 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.213 suites 1 1 n/a 0 0 00:09:57.213 tests 1 1 1 0 0 00:09:57.213 asserts 15 15 15 0 n/a 00:09:57.213 00:09:57.213 Elapsed time = 0.110 seconds 00:09:57.213 00:09:57.213 real 0m0.340s 00:09:57.213 user 0m0.150s 00:09:57.213 sys 0m0.086s 00:09:57.213 15:21:42 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.213 15:21:42 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:57.213 ************************************ 00:09:57.213 END TEST env_mem_callbacks 00:09:57.213 ************************************ 00:09:57.213 00:09:57.213 real 0m10.404s 00:09:57.213 user 0m8.483s 00:09:57.213 sys 0m1.553s 00:09:57.214 15:21:43 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.214 15:21:43 env -- common/autotest_common.sh@10 -- # set +x 00:09:57.214 ************************************ 00:09:57.214 END TEST env 00:09:57.214 
************************************ 00:09:57.214 15:21:43 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:57.214 15:21:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.214 15:21:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.214 15:21:43 -- common/autotest_common.sh@10 -- # set +x 00:09:57.214 ************************************ 00:09:57.214 START TEST rpc 00:09:57.214 ************************************ 00:09:57.214 15:21:43 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:57.472 * Looking for test storage... 00:09:57.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:57.472 15:21:43 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.472 15:21:43 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.472 15:21:43 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.472 15:21:43 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.472 15:21:43 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.472 15:21:43 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.472 15:21:43 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.472 15:21:43 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.472 15:21:43 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.472 15:21:43 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.472 15:21:43 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.472 15:21:43 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.472 15:21:43 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.472 15:21:43 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.472 15:21:43 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.472 15:21:43 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:57.472 15:21:43 rpc -- scripts/common.sh@345 -- # : 1 00:09:57.472 15:21:43 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.472 15:21:43 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.472 15:21:43 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:57.472 15:21:43 rpc -- scripts/common.sh@353 -- # local d=1 00:09:57.472 15:21:43 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.472 15:21:43 rpc -- scripts/common.sh@355 -- # echo 1 00:09:57.472 15:21:43 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.472 15:21:43 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:57.472 15:21:43 rpc -- scripts/common.sh@353 -- # local d=2 00:09:57.472 15:21:43 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.472 15:21:43 rpc -- scripts/common.sh@355 -- # echo 2 00:09:57.472 15:21:43 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.472 15:21:43 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.472 15:21:43 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.472 15:21:43 rpc -- scripts/common.sh@368 -- # return 0 00:09:57.472 15:21:43 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.472 15:21:43 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.472 --rc genhtml_branch_coverage=1 00:09:57.472 --rc genhtml_function_coverage=1 00:09:57.472 --rc genhtml_legend=1 00:09:57.472 --rc geninfo_all_blocks=1 00:09:57.472 --rc geninfo_unexecuted_blocks=1 00:09:57.472 00:09:57.472 ' 00:09:57.472 15:21:43 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.472 --rc genhtml_branch_coverage=1 00:09:57.472 --rc genhtml_function_coverage=1 00:09:57.472 --rc genhtml_legend=1 00:09:57.472 --rc geninfo_all_blocks=1 00:09:57.472 --rc geninfo_unexecuted_blocks=1 00:09:57.472 00:09:57.472 ' 00:09:57.472 15:21:43 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.472 --rc genhtml_branch_coverage=1 00:09:57.472 --rc genhtml_function_coverage=1 00:09:57.472 --rc genhtml_legend=1 00:09:57.472 --rc geninfo_all_blocks=1 00:09:57.472 --rc geninfo_unexecuted_blocks=1 00:09:57.472 00:09:57.472 ' 00:09:57.472 15:21:43 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.473 --rc genhtml_branch_coverage=1 00:09:57.473 --rc genhtml_function_coverage=1 00:09:57.473 --rc genhtml_legend=1 00:09:57.473 --rc geninfo_all_blocks=1 00:09:57.473 --rc geninfo_unexecuted_blocks=1 00:09:57.473 00:09:57.473 ' 00:09:57.473 15:21:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57920 00:09:57.473 15:21:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:57.473 15:21:43 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:57.473 15:21:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57920 00:09:57.473 15:21:43 rpc -- common/autotest_common.sh@835 -- # '[' -z 57920 ']' 00:09:57.473 15:21:43 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.473 15:21:43 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.473 15:21:43 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
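
waitforlisten above simply polls until spdk_tgt exposes its UNIX-domain RPC socket. A rough hand-driven equivalent, assuming the default socket path /var/tmp/spdk.sock (the polling loop is a crude stand-in for the helper, not its real implementation):

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt -e bdev &                        # -e bdev enables the bdev tracepoint group, as rpc.sh does
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket to appear
    ./scripts/rpc.py rpc_get_methods > /dev/null          # target is ready once this returns
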
00:09:57.473 15:21:43 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.473 15:21:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.731 [2024-11-20 15:21:43.466734] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:09:57.731 [2024-11-20 15:21:43.467152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57920 ] 00:09:57.731 [2024-11-20 15:21:43.673230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.990 [2024-11-20 15:21:43.840311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:57.990 [2024-11-20 15:21:43.840613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57920' to capture a snapshot of events at runtime. 00:09:57.990 [2024-11-20 15:21:43.840783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.990 [2024-11-20 15:21:43.841053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.990 [2024-11-20 15:21:43.841106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57920 for offline analysis/debug. 00:09:57.990 [2024-11-20 15:21:43.843319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.925 15:21:44 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.925 15:21:44 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:58.925 15:21:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:58.925 15:21:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:58.925 15:21:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:58.925 15:21:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:58.925 15:21:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.925 15:21:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.925 15:21:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.925 ************************************ 00:09:58.925 START TEST rpc_integrity 00:09:58.925 ************************************ 00:09:58.925 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:58.925 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:58.925 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.925 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:58.925 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.925 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:58.925 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:58.925 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:58.925 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:58.925 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.925 15:21:44 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:58.925 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.925 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:58.925 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:58.925 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.925 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.184 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.184 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:59.184 { 00:09:59.184 "name": "Malloc0", 00:09:59.184 "aliases": [ 00:09:59.184 "54b9b910-84fe-4e68-ac7a-ea2f9b15ec4a" 00:09:59.184 ], 00:09:59.184 "product_name": "Malloc disk", 00:09:59.184 "block_size": 512, 00:09:59.184 "num_blocks": 16384, 00:09:59.184 "uuid": "54b9b910-84fe-4e68-ac7a-ea2f9b15ec4a", 00:09:59.184 "assigned_rate_limits": { 00:09:59.184 "rw_ios_per_sec": 0, 00:09:59.184 "rw_mbytes_per_sec": 0, 00:09:59.184 "r_mbytes_per_sec": 0, 00:09:59.184 "w_mbytes_per_sec": 0 00:09:59.184 }, 00:09:59.184 "claimed": false, 00:09:59.184 "zoned": false, 00:09:59.184 "supported_io_types": { 00:09:59.184 "read": true, 00:09:59.184 "write": true, 00:09:59.184 "unmap": true, 00:09:59.184 "flush": true, 00:09:59.184 "reset": true, 00:09:59.184 "nvme_admin": false, 00:09:59.184 "nvme_io": false, 00:09:59.184 "nvme_io_md": false, 00:09:59.184 "write_zeroes": true, 00:09:59.184 "zcopy": true, 00:09:59.184 "get_zone_info": false, 00:09:59.184 "zone_management": false, 00:09:59.184 "zone_append": false, 00:09:59.184 "compare": false, 00:09:59.184 "compare_and_write": false, 00:09:59.184 "abort": true, 00:09:59.184 "seek_hole": false, 00:09:59.184 "seek_data": false, 00:09:59.184 "copy": true, 00:09:59.184 "nvme_iov_md": false 00:09:59.184 }, 00:09:59.184 "memory_domains": [ 00:09:59.184 { 00:09:59.184 "dma_device_id": "system", 00:09:59.184 "dma_device_type": 1 00:09:59.184 }, 00:09:59.184 { 00:09:59.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.184 "dma_device_type": 2 00:09:59.184 } 00:09:59.184 ], 00:09:59.184 "driver_specific": {} 00:09:59.184 } 00:09:59.184 ]' 00:09:59.184 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:59.184 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:59.184 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:59.184 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.184 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.184 [2024-11-20 15:21:44.927742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:59.184 [2024-11-20 15:21:44.927816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.184 [2024-11-20 15:21:44.927860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:59.184 [2024-11-20 15:21:44.927887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.184 [2024-11-20 15:21:44.930497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.185 [2024-11-20 15:21:44.930548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:59.185 Passthru0 00:09:59.185 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.185 
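
The JSON dump above is bdev_get_bdevs output for the freshly created malloc disk, and the passthru vbdev has just claimed it (note "claimed": true / "claim_type": "exclusive_write" in the next dump). The same sequence by hand, a sketch using the sizes the test passes (8 MiB disk, 512-byte blocks):

    ./scripts/rpc.py bdev_malloc_create 8 512                       # prints the new name, Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0   # takes an exclusive_write claim on Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length                     # 2: Malloc0 plus Passthru0
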
15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:59.185 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.185 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.185 15:21:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.185 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:59.185 { 00:09:59.185 "name": "Malloc0", 00:09:59.185 "aliases": [ 00:09:59.185 "54b9b910-84fe-4e68-ac7a-ea2f9b15ec4a" 00:09:59.185 ], 00:09:59.185 "product_name": "Malloc disk", 00:09:59.185 "block_size": 512, 00:09:59.185 "num_blocks": 16384, 00:09:59.185 "uuid": "54b9b910-84fe-4e68-ac7a-ea2f9b15ec4a", 00:09:59.185 "assigned_rate_limits": { 00:09:59.185 "rw_ios_per_sec": 0, 00:09:59.185 "rw_mbytes_per_sec": 0, 00:09:59.185 "r_mbytes_per_sec": 0, 00:09:59.185 "w_mbytes_per_sec": 0 00:09:59.185 }, 00:09:59.185 "claimed": true, 00:09:59.185 "claim_type": "exclusive_write", 00:09:59.185 "zoned": false, 00:09:59.185 "supported_io_types": { 00:09:59.185 "read": true, 00:09:59.185 "write": true, 00:09:59.185 "unmap": true, 00:09:59.185 "flush": true, 00:09:59.185 "reset": true, 00:09:59.185 "nvme_admin": false, 00:09:59.185 "nvme_io": false, 00:09:59.185 "nvme_io_md": false, 00:09:59.185 "write_zeroes": true, 00:09:59.185 "zcopy": true, 00:09:59.185 "get_zone_info": false, 00:09:59.185 "zone_management": false, 00:09:59.185 "zone_append": false, 00:09:59.185 "compare": false, 00:09:59.185 "compare_and_write": false, 00:09:59.185 "abort": true, 00:09:59.185 "seek_hole": false, 00:09:59.185 "seek_data": false, 00:09:59.185 "copy": true, 00:09:59.185 "nvme_iov_md": false 00:09:59.185 }, 00:09:59.185 "memory_domains": [ 00:09:59.185 { 00:09:59.185 "dma_device_id": "system", 00:09:59.185 "dma_device_type": 1 00:09:59.185 }, 00:09:59.185 { 00:09:59.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.185 "dma_device_type": 2 00:09:59.185 } 00:09:59.185 ], 00:09:59.185 "driver_specific": {} 00:09:59.185 }, 00:09:59.185 { 00:09:59.185 "name": "Passthru0", 00:09:59.185 "aliases": [ 00:09:59.185 "1a47429a-092a-5717-a0f2-3eddf7f90580" 00:09:59.185 ], 00:09:59.185 "product_name": "passthru", 00:09:59.185 "block_size": 512, 00:09:59.185 "num_blocks": 16384, 00:09:59.185 "uuid": "1a47429a-092a-5717-a0f2-3eddf7f90580", 00:09:59.185 "assigned_rate_limits": { 00:09:59.185 "rw_ios_per_sec": 0, 00:09:59.185 "rw_mbytes_per_sec": 0, 00:09:59.185 "r_mbytes_per_sec": 0, 00:09:59.185 "w_mbytes_per_sec": 0 00:09:59.185 }, 00:09:59.185 "claimed": false, 00:09:59.185 "zoned": false, 00:09:59.185 "supported_io_types": { 00:09:59.185 "read": true, 00:09:59.185 "write": true, 00:09:59.185 "unmap": true, 00:09:59.185 "flush": true, 00:09:59.185 "reset": true, 00:09:59.185 "nvme_admin": false, 00:09:59.185 "nvme_io": false, 00:09:59.185 "nvme_io_md": false, 00:09:59.185 "write_zeroes": true, 00:09:59.185 "zcopy": true, 00:09:59.185 "get_zone_info": false, 00:09:59.185 "zone_management": false, 00:09:59.185 "zone_append": false, 00:09:59.185 "compare": false, 00:09:59.185 "compare_and_write": false, 00:09:59.185 "abort": true, 00:09:59.185 "seek_hole": false, 00:09:59.185 "seek_data": false, 00:09:59.185 "copy": true, 00:09:59.185 "nvme_iov_md": false 00:09:59.185 }, 00:09:59.185 "memory_domains": [ 00:09:59.185 { 00:09:59.185 "dma_device_id": "system", 00:09:59.185 "dma_device_type": 1 00:09:59.185 }, 00:09:59.185 { 00:09:59.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.185 "dma_device_type": 2 
00:09:59.185 } 00:09:59.185 ], 00:09:59.185 "driver_specific": { 00:09:59.185 "passthru": { 00:09:59.185 "name": "Passthru0", 00:09:59.185 "base_bdev_name": "Malloc0" 00:09:59.185 } 00:09:59.185 } 00:09:59.185 } 00:09:59.185 ]' 00:09:59.185 15:21:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:59.185 15:21:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:59.185 15:21:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:59.185 15:21:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.185 15:21:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.185 15:21:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.185 15:21:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:59.185 15:21:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.185 15:21:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.185 15:21:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.185 15:21:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:59.185 15:21:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.185 15:21:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.185 15:21:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.185 15:21:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:59.185 15:21:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:59.185 ************************************ 00:09:59.185 END TEST rpc_integrity 00:09:59.185 ************************************ 00:09:59.185 15:21:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:59.185 00:09:59.185 real 0m0.325s 00:09:59.185 user 0m0.172s 00:09:59.185 sys 0m0.056s 00:09:59.185 15:21:45 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.185 15:21:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.443 15:21:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:59.443 15:21:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.443 15:21:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.443 15:21:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.443 ************************************ 00:09:59.443 START TEST rpc_plugins 00:09:59.443 ************************************ 00:09:59.443 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:59.443 15:21:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:59.443 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.443 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:59.443 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.444 15:21:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:59.444 15:21:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:59.444 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.444 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:59.444 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.444 15:21:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:59.444 { 00:09:59.444 "name": "Malloc1", 00:09:59.444 "aliases": 
[ 00:09:59.444 "93dbc6b4-4ce3-4b72-b60b-d524a72db943" 00:09:59.444 ], 00:09:59.444 "product_name": "Malloc disk", 00:09:59.444 "block_size": 4096, 00:09:59.444 "num_blocks": 256, 00:09:59.444 "uuid": "93dbc6b4-4ce3-4b72-b60b-d524a72db943", 00:09:59.444 "assigned_rate_limits": { 00:09:59.444 "rw_ios_per_sec": 0, 00:09:59.444 "rw_mbytes_per_sec": 0, 00:09:59.444 "r_mbytes_per_sec": 0, 00:09:59.444 "w_mbytes_per_sec": 0 00:09:59.444 }, 00:09:59.444 "claimed": false, 00:09:59.444 "zoned": false, 00:09:59.444 "supported_io_types": { 00:09:59.444 "read": true, 00:09:59.444 "write": true, 00:09:59.444 "unmap": true, 00:09:59.444 "flush": true, 00:09:59.444 "reset": true, 00:09:59.444 "nvme_admin": false, 00:09:59.444 "nvme_io": false, 00:09:59.444 "nvme_io_md": false, 00:09:59.444 "write_zeroes": true, 00:09:59.444 "zcopy": true, 00:09:59.444 "get_zone_info": false, 00:09:59.444 "zone_management": false, 00:09:59.444 "zone_append": false, 00:09:59.444 "compare": false, 00:09:59.444 "compare_and_write": false, 00:09:59.444 "abort": true, 00:09:59.444 "seek_hole": false, 00:09:59.444 "seek_data": false, 00:09:59.444 "copy": true, 00:09:59.444 "nvme_iov_md": false 00:09:59.444 }, 00:09:59.444 "memory_domains": [ 00:09:59.444 { 00:09:59.444 "dma_device_id": "system", 00:09:59.444 "dma_device_type": 1 00:09:59.444 }, 00:09:59.444 { 00:09:59.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.444 "dma_device_type": 2 00:09:59.444 } 00:09:59.444 ], 00:09:59.444 "driver_specific": {} 00:09:59.444 } 00:09:59.444 ]' 00:09:59.444 15:21:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:59.444 15:21:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:59.444 15:21:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:59.444 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.444 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:59.444 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.444 15:21:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:59.444 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.444 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:59.444 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.444 15:21:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:59.444 15:21:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:59.444 ************************************ 00:09:59.444 END TEST rpc_plugins 00:09:59.444 ************************************ 00:09:59.444 15:21:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:59.444 00:09:59.444 real 0m0.149s 00:09:59.444 user 0m0.081s 00:09:59.444 sys 0m0.024s 00:09:59.444 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.444 15:21:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:59.444 15:21:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:59.444 15:21:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.444 15:21:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.444 15:21:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.444 ************************************ 00:09:59.444 START TEST rpc_trace_cmd_test 00:09:59.444 ************************************ 00:09:59.444 15:21:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:09:59.444 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:59.444 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:59.444 15:21:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.444 15:21:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.444 15:21:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:59.702 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57920", 00:09:59.702 "tpoint_group_mask": "0x8", 00:09:59.702 "iscsi_conn": { 00:09:59.702 "mask": "0x2", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "scsi": { 00:09:59.702 "mask": "0x4", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "bdev": { 00:09:59.702 "mask": "0x8", 00:09:59.702 "tpoint_mask": "0xffffffffffffffff" 00:09:59.702 }, 00:09:59.702 "nvmf_rdma": { 00:09:59.702 "mask": "0x10", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "nvmf_tcp": { 00:09:59.702 "mask": "0x20", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "ftl": { 00:09:59.702 "mask": "0x40", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "blobfs": { 00:09:59.702 "mask": "0x80", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "dsa": { 00:09:59.702 "mask": "0x200", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "thread": { 00:09:59.702 "mask": "0x400", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "nvme_pcie": { 00:09:59.702 "mask": "0x800", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "iaa": { 00:09:59.702 "mask": "0x1000", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "nvme_tcp": { 00:09:59.702 "mask": "0x2000", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "bdev_nvme": { 00:09:59.702 "mask": "0x4000", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "sock": { 00:09:59.702 "mask": "0x8000", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "blob": { 00:09:59.702 "mask": "0x10000", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "bdev_raid": { 00:09:59.702 "mask": "0x20000", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 }, 00:09:59.702 "scheduler": { 00:09:59.702 "mask": "0x40000", 00:09:59.702 "tpoint_mask": "0x0" 00:09:59.702 } 00:09:59.702 }' 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:59.702 ************************************ 00:09:59.702 END TEST rpc_trace_cmd_test 00:09:59.702 ************************************ 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:59.702 00:09:59.702 real 0m0.248s 
00:09:59.702 user 0m0.204s 00:09:59.702 sys 0m0.032s 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.702 15:21:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.961 15:21:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:59.961 15:21:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:59.961 15:21:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:59.961 15:21:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.961 15:21:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.961 15:21:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.961 ************************************ 00:09:59.961 START TEST rpc_daemon_integrity 00:09:59.961 ************************************ 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:59.961 { 00:09:59.961 "name": "Malloc2", 00:09:59.961 "aliases": [ 00:09:59.961 "6628c52e-534f-47c5-bcab-d01d4c52095d" 00:09:59.961 ], 00:09:59.961 "product_name": "Malloc disk", 00:09:59.961 "block_size": 512, 00:09:59.961 "num_blocks": 16384, 00:09:59.961 "uuid": "6628c52e-534f-47c5-bcab-d01d4c52095d", 00:09:59.961 "assigned_rate_limits": { 00:09:59.961 "rw_ios_per_sec": 0, 00:09:59.961 "rw_mbytes_per_sec": 0, 00:09:59.961 "r_mbytes_per_sec": 0, 00:09:59.961 "w_mbytes_per_sec": 0 00:09:59.961 }, 00:09:59.961 "claimed": false, 00:09:59.961 "zoned": false, 00:09:59.961 "supported_io_types": { 00:09:59.961 "read": true, 00:09:59.961 "write": true, 00:09:59.961 "unmap": true, 00:09:59.961 "flush": true, 00:09:59.961 "reset": true, 00:09:59.961 "nvme_admin": false, 00:09:59.961 "nvme_io": false, 00:09:59.961 "nvme_io_md": false, 00:09:59.961 "write_zeroes": true, 00:09:59.961 "zcopy": true, 00:09:59.961 "get_zone_info": false, 00:09:59.961 "zone_management": false, 00:09:59.961 "zone_append": false, 00:09:59.961 "compare": false, 00:09:59.961 
"compare_and_write": false, 00:09:59.961 "abort": true, 00:09:59.961 "seek_hole": false, 00:09:59.961 "seek_data": false, 00:09:59.961 "copy": true, 00:09:59.961 "nvme_iov_md": false 00:09:59.961 }, 00:09:59.961 "memory_domains": [ 00:09:59.961 { 00:09:59.961 "dma_device_id": "system", 00:09:59.961 "dma_device_type": 1 00:09:59.961 }, 00:09:59.961 { 00:09:59.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.961 "dma_device_type": 2 00:09:59.961 } 00:09:59.961 ], 00:09:59.961 "driver_specific": {} 00:09:59.961 } 00:09:59.961 ]' 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.961 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.961 [2024-11-20 15:21:45.843006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:59.961 [2024-11-20 15:21:45.843081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.961 [2024-11-20 15:21:45.843111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:59.961 [2024-11-20 15:21:45.843128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.961 [2024-11-20 15:21:45.845912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.961 [2024-11-20 15:21:45.845960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:59.962 Passthru0 00:09:59.962 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.962 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:59.962 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.962 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:59.962 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.962 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:59.962 { 00:09:59.962 "name": "Malloc2", 00:09:59.962 "aliases": [ 00:09:59.962 "6628c52e-534f-47c5-bcab-d01d4c52095d" 00:09:59.962 ], 00:09:59.962 "product_name": "Malloc disk", 00:09:59.962 "block_size": 512, 00:09:59.962 "num_blocks": 16384, 00:09:59.962 "uuid": "6628c52e-534f-47c5-bcab-d01d4c52095d", 00:09:59.962 "assigned_rate_limits": { 00:09:59.962 "rw_ios_per_sec": 0, 00:09:59.962 "rw_mbytes_per_sec": 0, 00:09:59.962 "r_mbytes_per_sec": 0, 00:09:59.962 "w_mbytes_per_sec": 0 00:09:59.962 }, 00:09:59.962 "claimed": true, 00:09:59.962 "claim_type": "exclusive_write", 00:09:59.962 "zoned": false, 00:09:59.962 "supported_io_types": { 00:09:59.962 "read": true, 00:09:59.962 "write": true, 00:09:59.962 "unmap": true, 00:09:59.962 "flush": true, 00:09:59.962 "reset": true, 00:09:59.962 "nvme_admin": false, 00:09:59.962 "nvme_io": false, 00:09:59.962 "nvme_io_md": false, 00:09:59.962 "write_zeroes": true, 00:09:59.962 "zcopy": true, 00:09:59.962 "get_zone_info": false, 00:09:59.962 "zone_management": false, 00:09:59.962 "zone_append": false, 00:09:59.962 "compare": false, 00:09:59.962 "compare_and_write": false, 00:09:59.962 "abort": true, 00:09:59.962 "seek_hole": false, 00:09:59.962 "seek_data": false, 
00:09:59.962 "copy": true, 00:09:59.962 "nvme_iov_md": false 00:09:59.962 }, 00:09:59.962 "memory_domains": [ 00:09:59.962 { 00:09:59.962 "dma_device_id": "system", 00:09:59.962 "dma_device_type": 1 00:09:59.962 }, 00:09:59.962 { 00:09:59.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.962 "dma_device_type": 2 00:09:59.962 } 00:09:59.962 ], 00:09:59.962 "driver_specific": {} 00:09:59.962 }, 00:09:59.962 { 00:09:59.962 "name": "Passthru0", 00:09:59.962 "aliases": [ 00:09:59.962 "36b03c97-d7b1-5a65-a74c-06ba8242ceaf" 00:09:59.962 ], 00:09:59.962 "product_name": "passthru", 00:09:59.962 "block_size": 512, 00:09:59.962 "num_blocks": 16384, 00:09:59.962 "uuid": "36b03c97-d7b1-5a65-a74c-06ba8242ceaf", 00:09:59.962 "assigned_rate_limits": { 00:09:59.962 "rw_ios_per_sec": 0, 00:09:59.962 "rw_mbytes_per_sec": 0, 00:09:59.962 "r_mbytes_per_sec": 0, 00:09:59.962 "w_mbytes_per_sec": 0 00:09:59.962 }, 00:09:59.962 "claimed": false, 00:09:59.962 "zoned": false, 00:09:59.962 "supported_io_types": { 00:09:59.962 "read": true, 00:09:59.962 "write": true, 00:09:59.962 "unmap": true, 00:09:59.962 "flush": true, 00:09:59.962 "reset": true, 00:09:59.962 "nvme_admin": false, 00:09:59.962 "nvme_io": false, 00:09:59.962 "nvme_io_md": false, 00:09:59.962 "write_zeroes": true, 00:09:59.962 "zcopy": true, 00:09:59.962 "get_zone_info": false, 00:09:59.962 "zone_management": false, 00:09:59.962 "zone_append": false, 00:09:59.962 "compare": false, 00:09:59.962 "compare_and_write": false, 00:09:59.962 "abort": true, 00:09:59.962 "seek_hole": false, 00:09:59.962 "seek_data": false, 00:09:59.962 "copy": true, 00:09:59.962 "nvme_iov_md": false 00:09:59.962 }, 00:09:59.962 "memory_domains": [ 00:09:59.962 { 00:09:59.962 "dma_device_id": "system", 00:09:59.962 "dma_device_type": 1 00:09:59.962 }, 00:09:59.962 { 00:09:59.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.962 "dma_device_type": 2 00:09:59.962 } 00:09:59.962 ], 00:09:59.962 "driver_specific": { 00:09:59.962 "passthru": { 00:09:59.962 "name": "Passthru0", 00:09:59.962 "base_bdev_name": "Malloc2" 00:09:59.962 } 00:09:59.962 } 00:09:59.962 } 00:09:59.962 ]' 00:09:59.962 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:10:00.222 15:21:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:00.222 15:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:00.222 00:10:00.222 real 0m0.344s 00:10:00.222 user 0m0.190s 00:10:00.222 sys 0m0.054s 00:10:00.222 15:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.222 15:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:00.222 ************************************ 00:10:00.222 END TEST rpc_daemon_integrity 00:10:00.222 ************************************ 00:10:00.222 15:21:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:00.222 15:21:46 rpc -- rpc/rpc.sh@84 -- # killprocess 57920 00:10:00.222 15:21:46 rpc -- common/autotest_common.sh@954 -- # '[' -z 57920 ']' 00:10:00.222 15:21:46 rpc -- common/autotest_common.sh@958 -- # kill -0 57920 00:10:00.222 15:21:46 rpc -- common/autotest_common.sh@959 -- # uname 00:10:00.222 15:21:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.222 15:21:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57920 00:10:00.222 15:21:46 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.222 15:21:46 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.222 15:21:46 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57920' 00:10:00.222 killing process with pid 57920 00:10:00.222 15:21:46 rpc -- common/autotest_common.sh@973 -- # kill 57920 00:10:00.222 15:21:46 rpc -- common/autotest_common.sh@978 -- # wait 57920 00:10:02.816 00:10:02.816 real 0m5.577s 00:10:02.816 user 0m6.159s 00:10:02.816 sys 0m0.970s 00:10:02.816 15:21:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.816 15:21:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.816 ************************************ 00:10:02.816 END TEST rpc 00:10:02.816 ************************************ 00:10:02.816 15:21:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:02.816 15:21:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.816 15:21:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.816 15:21:48 -- common/autotest_common.sh@10 -- # set +x 00:10:02.816 ************************************ 00:10:02.816 START TEST skip_rpc 00:10:02.816 ************************************ 00:10:02.816 15:21:48 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:03.075 * Looking for test storage... 
00:10:03.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.075 15:21:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:03.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.075 --rc genhtml_branch_coverage=1 00:10:03.075 --rc genhtml_function_coverage=1 00:10:03.075 --rc genhtml_legend=1 00:10:03.075 --rc geninfo_all_blocks=1 00:10:03.075 --rc geninfo_unexecuted_blocks=1 00:10:03.075 00:10:03.075 ' 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:03.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.075 --rc genhtml_branch_coverage=1 00:10:03.075 --rc genhtml_function_coverage=1 00:10:03.075 --rc genhtml_legend=1 00:10:03.075 --rc geninfo_all_blocks=1 00:10:03.075 --rc geninfo_unexecuted_blocks=1 00:10:03.075 00:10:03.075 ' 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:03.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.075 --rc genhtml_branch_coverage=1 00:10:03.075 --rc genhtml_function_coverage=1 00:10:03.075 --rc genhtml_legend=1 00:10:03.075 --rc geninfo_all_blocks=1 00:10:03.075 --rc geninfo_unexecuted_blocks=1 00:10:03.075 00:10:03.075 ' 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:03.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.075 --rc genhtml_branch_coverage=1 00:10:03.075 --rc genhtml_function_coverage=1 00:10:03.075 --rc genhtml_legend=1 00:10:03.075 --rc geninfo_all_blocks=1 00:10:03.075 --rc geninfo_unexecuted_blocks=1 00:10:03.075 00:10:03.075 ' 00:10:03.075 15:21:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:03.075 15:21:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:03.075 15:21:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.075 15:21:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.075 ************************************ 00:10:03.075 START TEST skip_rpc 00:10:03.075 ************************************ 00:10:03.075 15:21:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:10:03.075 15:21:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58159 00:10:03.075 15:21:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:03.075 15:21:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:03.075 15:21:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:03.334 [2024-11-20 15:21:49.115673] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
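The skip_rpc case starting here boots the target with --no-rpc-server, so the rpc_cmd call it issues is expected to fail. The shape of that assertion, condensed (the 2-second client timeout is illustrative, not the value the test uses):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5
  if scripts/rpc.py -t 2 spdk_get_version; then
      echo 'unexpected: RPC server answered' >&2; exit 1
  fi
  kill "$spdk_pid"; wait "$spdk_pid" || true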
00:10:03.334 [2024-11-20 15:21:49.116081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58159 ] 00:10:03.593 [2024-11-20 15:21:49.313417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.593 [2024-11-20 15:21:49.431476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58159 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58159 ']' 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58159 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.868 15:21:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58159 00:10:08.868 killing process with pid 58159 00:10:08.868 15:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.868 15:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.868 15:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58159' 00:10:08.868 15:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58159 00:10:08.868 15:21:54 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58159 00:10:10.773 00:10:10.773 real 0m7.569s 00:10:10.773 user 0m7.033s 00:10:10.773 sys 0m0.441s 00:10:10.773 ************************************ 00:10:10.773 END TEST skip_rpc 00:10:10.773 ************************************ 00:10:10.773 15:21:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.773 15:21:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:10:10.773 15:21:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:10.773 15:21:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.773 15:21:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.773 15:21:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.773 ************************************ 00:10:10.773 START TEST skip_rpc_with_json 00:10:10.773 ************************************ 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58264 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58264 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58264 ']' 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.773 15:21:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:11.032 [2024-11-20 15:21:56.736136] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
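The first thing this test does after the target is up is probe for a TCP transport that does not exist yet, which produces the -19 (ENODEV, "No such device") JSON-RPC error shown below; creating the transport makes the same query succeed. The equivalent manual sequence:

  scripts/rpc.py nvmf_get_transports --trtype tcp || echo 'no tcp transport yet (expected)'
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py nvmf_get_transports --trtype tcp    # now returns the transport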
00:10:11.032 [2024-11-20 15:21:56.736594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58264 ] 00:10:11.032 [2024-11-20 15:21:56.928595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.291 [2024-11-20 15:21:57.048521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:12.227 [2024-11-20 15:21:57.975690] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:12.227 request: 00:10:12.227 { 00:10:12.227 "trtype": "tcp", 00:10:12.227 "method": "nvmf_get_transports", 00:10:12.227 "req_id": 1 00:10:12.227 } 00:10:12.227 Got JSON-RPC error response 00:10:12.227 response: 00:10:12.227 { 00:10:12.227 "code": -19, 00:10:12.227 "message": "No such device" 00:10:12.227 } 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:12.227 [2024-11-20 15:21:57.987837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.227 15:21:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:12.227 15:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.228 15:21:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:12.486 { 00:10:12.486 "subsystems": [ 00:10:12.486 { 00:10:12.486 "subsystem": "fsdev", 00:10:12.486 "config": [ 00:10:12.486 { 00:10:12.487 "method": "fsdev_set_opts", 00:10:12.487 "params": { 00:10:12.487 "fsdev_io_pool_size": 65535, 00:10:12.487 "fsdev_io_cache_size": 256 00:10:12.487 } 00:10:12.487 } 00:10:12.487 ] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "keyring", 00:10:12.487 "config": [] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "iobuf", 00:10:12.487 "config": [ 00:10:12.487 { 00:10:12.487 "method": "iobuf_set_options", 00:10:12.487 "params": { 00:10:12.487 "small_pool_count": 8192, 00:10:12.487 "large_pool_count": 1024, 00:10:12.487 "small_bufsize": 8192, 00:10:12.487 "large_bufsize": 135168, 00:10:12.487 "enable_numa": false 00:10:12.487 } 00:10:12.487 } 00:10:12.487 ] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "sock", 00:10:12.487 "config": [ 00:10:12.487 { 
00:10:12.487 "method": "sock_set_default_impl", 00:10:12.487 "params": { 00:10:12.487 "impl_name": "posix" 00:10:12.487 } 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "method": "sock_impl_set_options", 00:10:12.487 "params": { 00:10:12.487 "impl_name": "ssl", 00:10:12.487 "recv_buf_size": 4096, 00:10:12.487 "send_buf_size": 4096, 00:10:12.487 "enable_recv_pipe": true, 00:10:12.487 "enable_quickack": false, 00:10:12.487 "enable_placement_id": 0, 00:10:12.487 "enable_zerocopy_send_server": true, 00:10:12.487 "enable_zerocopy_send_client": false, 00:10:12.487 "zerocopy_threshold": 0, 00:10:12.487 "tls_version": 0, 00:10:12.487 "enable_ktls": false 00:10:12.487 } 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "method": "sock_impl_set_options", 00:10:12.487 "params": { 00:10:12.487 "impl_name": "posix", 00:10:12.487 "recv_buf_size": 2097152, 00:10:12.487 "send_buf_size": 2097152, 00:10:12.487 "enable_recv_pipe": true, 00:10:12.487 "enable_quickack": false, 00:10:12.487 "enable_placement_id": 0, 00:10:12.487 "enable_zerocopy_send_server": true, 00:10:12.487 "enable_zerocopy_send_client": false, 00:10:12.487 "zerocopy_threshold": 0, 00:10:12.487 "tls_version": 0, 00:10:12.487 "enable_ktls": false 00:10:12.487 } 00:10:12.487 } 00:10:12.487 ] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "vmd", 00:10:12.487 "config": [] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "accel", 00:10:12.487 "config": [ 00:10:12.487 { 00:10:12.487 "method": "accel_set_options", 00:10:12.487 "params": { 00:10:12.487 "small_cache_size": 128, 00:10:12.487 "large_cache_size": 16, 00:10:12.487 "task_count": 2048, 00:10:12.487 "sequence_count": 2048, 00:10:12.487 "buf_count": 2048 00:10:12.487 } 00:10:12.487 } 00:10:12.487 ] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "bdev", 00:10:12.487 "config": [ 00:10:12.487 { 00:10:12.487 "method": "bdev_set_options", 00:10:12.487 "params": { 00:10:12.487 "bdev_io_pool_size": 65535, 00:10:12.487 "bdev_io_cache_size": 256, 00:10:12.487 "bdev_auto_examine": true, 00:10:12.487 "iobuf_small_cache_size": 128, 00:10:12.487 "iobuf_large_cache_size": 16 00:10:12.487 } 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "method": "bdev_raid_set_options", 00:10:12.487 "params": { 00:10:12.487 "process_window_size_kb": 1024, 00:10:12.487 "process_max_bandwidth_mb_sec": 0 00:10:12.487 } 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "method": "bdev_iscsi_set_options", 00:10:12.487 "params": { 00:10:12.487 "timeout_sec": 30 00:10:12.487 } 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "method": "bdev_nvme_set_options", 00:10:12.487 "params": { 00:10:12.487 "action_on_timeout": "none", 00:10:12.487 "timeout_us": 0, 00:10:12.487 "timeout_admin_us": 0, 00:10:12.487 "keep_alive_timeout_ms": 10000, 00:10:12.487 "arbitration_burst": 0, 00:10:12.487 "low_priority_weight": 0, 00:10:12.487 "medium_priority_weight": 0, 00:10:12.487 "high_priority_weight": 0, 00:10:12.487 "nvme_adminq_poll_period_us": 10000, 00:10:12.487 "nvme_ioq_poll_period_us": 0, 00:10:12.487 "io_queue_requests": 0, 00:10:12.487 "delay_cmd_submit": true, 00:10:12.487 "transport_retry_count": 4, 00:10:12.487 "bdev_retry_count": 3, 00:10:12.487 "transport_ack_timeout": 0, 00:10:12.487 "ctrlr_loss_timeout_sec": 0, 00:10:12.487 "reconnect_delay_sec": 0, 00:10:12.487 "fast_io_fail_timeout_sec": 0, 00:10:12.487 "disable_auto_failback": false, 00:10:12.487 "generate_uuids": false, 00:10:12.487 "transport_tos": 0, 00:10:12.487 "nvme_error_stat": false, 00:10:12.487 "rdma_srq_size": 0, 00:10:12.487 "io_path_stat": false, 
00:10:12.487 "allow_accel_sequence": false, 00:10:12.487 "rdma_max_cq_size": 0, 00:10:12.487 "rdma_cm_event_timeout_ms": 0, 00:10:12.487 "dhchap_digests": [ 00:10:12.487 "sha256", 00:10:12.487 "sha384", 00:10:12.487 "sha512" 00:10:12.487 ], 00:10:12.487 "dhchap_dhgroups": [ 00:10:12.487 "null", 00:10:12.487 "ffdhe2048", 00:10:12.487 "ffdhe3072", 00:10:12.487 "ffdhe4096", 00:10:12.487 "ffdhe6144", 00:10:12.487 "ffdhe8192" 00:10:12.487 ] 00:10:12.487 } 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "method": "bdev_nvme_set_hotplug", 00:10:12.487 "params": { 00:10:12.487 "period_us": 100000, 00:10:12.487 "enable": false 00:10:12.487 } 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "method": "bdev_wait_for_examine" 00:10:12.487 } 00:10:12.487 ] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "scsi", 00:10:12.487 "config": null 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "scheduler", 00:10:12.487 "config": [ 00:10:12.487 { 00:10:12.487 "method": "framework_set_scheduler", 00:10:12.487 "params": { 00:10:12.487 "name": "static" 00:10:12.487 } 00:10:12.487 } 00:10:12.487 ] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "vhost_scsi", 00:10:12.487 "config": [] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "vhost_blk", 00:10:12.487 "config": [] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "ublk", 00:10:12.487 "config": [] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "nbd", 00:10:12.487 "config": [] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "nvmf", 00:10:12.487 "config": [ 00:10:12.487 { 00:10:12.487 "method": "nvmf_set_config", 00:10:12.487 "params": { 00:10:12.487 "discovery_filter": "match_any", 00:10:12.487 "admin_cmd_passthru": { 00:10:12.487 "identify_ctrlr": false 00:10:12.487 }, 00:10:12.487 "dhchap_digests": [ 00:10:12.487 "sha256", 00:10:12.487 "sha384", 00:10:12.487 "sha512" 00:10:12.487 ], 00:10:12.487 "dhchap_dhgroups": [ 00:10:12.487 "null", 00:10:12.487 "ffdhe2048", 00:10:12.487 "ffdhe3072", 00:10:12.487 "ffdhe4096", 00:10:12.487 "ffdhe6144", 00:10:12.487 "ffdhe8192" 00:10:12.487 ] 00:10:12.487 } 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "method": "nvmf_set_max_subsystems", 00:10:12.487 "params": { 00:10:12.487 "max_subsystems": 1024 00:10:12.487 } 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "method": "nvmf_set_crdt", 00:10:12.487 "params": { 00:10:12.487 "crdt1": 0, 00:10:12.487 "crdt2": 0, 00:10:12.487 "crdt3": 0 00:10:12.487 } 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "method": "nvmf_create_transport", 00:10:12.487 "params": { 00:10:12.487 "trtype": "TCP", 00:10:12.487 "max_queue_depth": 128, 00:10:12.487 "max_io_qpairs_per_ctrlr": 127, 00:10:12.487 "in_capsule_data_size": 4096, 00:10:12.487 "max_io_size": 131072, 00:10:12.487 "io_unit_size": 131072, 00:10:12.487 "max_aq_depth": 128, 00:10:12.487 "num_shared_buffers": 511, 00:10:12.487 "buf_cache_size": 4294967295, 00:10:12.487 "dif_insert_or_strip": false, 00:10:12.487 "zcopy": false, 00:10:12.487 "c2h_success": true, 00:10:12.487 "sock_priority": 0, 00:10:12.487 "abort_timeout_sec": 1, 00:10:12.487 "ack_timeout": 0, 00:10:12.487 "data_wr_pool_size": 0 00:10:12.487 } 00:10:12.487 } 00:10:12.487 ] 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "subsystem": "iscsi", 00:10:12.487 "config": [ 00:10:12.487 { 00:10:12.487 "method": "iscsi_set_options", 00:10:12.487 "params": { 00:10:12.487 "node_base": "iqn.2016-06.io.spdk", 00:10:12.487 "max_sessions": 128, 00:10:12.487 "max_connections_per_session": 2, 00:10:12.487 "max_queue_depth": 64, 00:10:12.487 
"default_time2wait": 2, 00:10:12.487 "default_time2retain": 20, 00:10:12.487 "first_burst_length": 8192, 00:10:12.487 "immediate_data": true, 00:10:12.487 "allow_duplicated_isid": false, 00:10:12.487 "error_recovery_level": 0, 00:10:12.487 "nop_timeout": 60, 00:10:12.487 "nop_in_interval": 30, 00:10:12.487 "disable_chap": false, 00:10:12.487 "require_chap": false, 00:10:12.487 "mutual_chap": false, 00:10:12.487 "chap_group": 0, 00:10:12.487 "max_large_datain_per_connection": 64, 00:10:12.487 "max_r2t_per_connection": 4, 00:10:12.487 "pdu_pool_size": 36864, 00:10:12.488 "immediate_data_pool_size": 16384, 00:10:12.488 "data_out_pool_size": 2048 00:10:12.488 } 00:10:12.488 } 00:10:12.488 ] 00:10:12.488 } 00:10:12.488 ] 00:10:12.488 } 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58264 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58264 ']' 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58264 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58264 00:10:12.488 killing process with pid 58264 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58264' 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58264 00:10:12.488 15:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58264 00:10:15.772 15:22:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58326 00:10:15.772 15:22:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:15.772 15:22:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:21.098 15:22:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58326 00:10:21.098 15:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58326 ']' 00:10:21.098 15:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58326 00:10:21.098 15:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:21.098 15:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.098 15:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58326 00:10:21.098 killing process with pid 58326 00:10:21.098 15:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.098 15:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.098 15:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58326' 00:10:21.098 15:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58326 00:10:21.098 15:22:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58326 00:10:23.035 15:22:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:23.035 15:22:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:23.035 00:10:23.035 real 0m11.960s 00:10:23.035 user 0m11.313s 00:10:23.035 sys 0m1.001s 00:10:23.035 15:22:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.035 15:22:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:23.035 ************************************ 00:10:23.035 END TEST skip_rpc_with_json 00:10:23.035 ************************************ 00:10:23.035 15:22:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:23.035 15:22:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:23.035 15:22:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.035 15:22:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.035 ************************************ 00:10:23.035 START TEST skip_rpc_with_delay 00:10:23.035 ************************************ 00:10:23.035 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:23.036 [2024-11-20 15:22:08.717373] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
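The error above is the whole point of the delay test: --wait-for-rpc defers subsystem initialization until an RPC arrives, so it is rejected when combined with --no-rpc-server. The valid pairing looks like this (the sleep is a placeholder for the socket polling the real tests do):

  build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
  sleep 2
  scripts/rpc.py rpc_get_methods        # only startup-time RPCs are available here
  scripts/rpc.py framework_start_init   # subsystems initialize from this point on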
00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:23.036 00:10:23.036 real 0m0.177s 00:10:23.036 user 0m0.088s 00:10:23.036 sys 0m0.088s 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.036 15:22:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:23.036 ************************************ 00:10:23.036 END TEST skip_rpc_with_delay 00:10:23.036 ************************************ 00:10:23.036 15:22:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:23.036 15:22:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:23.036 15:22:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:23.036 15:22:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:23.036 15:22:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.036 15:22:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.036 ************************************ 00:10:23.036 START TEST exit_on_failed_rpc_init 00:10:23.036 ************************************ 00:10:23.036 15:22:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:10:23.036 15:22:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58459 00:10:23.036 15:22:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58459 00:10:23.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.036 15:22:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58459 ']' 00:10:23.036 15:22:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.036 15:22:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.036 15:22:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:23.036 15:22:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.036 15:22:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.036 15:22:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:23.294 [2024-11-20 15:22:09.001864] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
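waitforlisten, traced above, blocks until the new target both stays alive and answers on its RPC socket. The essence of the helper is a poll loop along these lines (retry count and interval are illustrative; the real helper lives in autotest_common.sh):

  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
          [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }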
00:10:23.294 [2024-11-20 15:22:09.002042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58459 ] 00:10:23.294 [2024-11-20 15:22:09.196059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.552 [2024-11-20 15:22:09.319142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:24.489 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:24.747 [2024-11-20 15:22:10.531601] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:10:24.747 [2024-11-20 15:22:10.532495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58483 ] 00:10:25.005 [2024-11-20 15:22:10.724669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.005 [2024-11-20 15:22:10.890444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.005 [2024-11-20 15:22:10.890540] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
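The "socket path in use" failure above is deliberate: both instances default to /var/tmp/spdk.sock, and the test verifies the second one exits non-zero. Outside this test, a second target runs fine on its own socket via -r and is queried with rpc.py -s, for instance:

  build/bin/spdk_tgt -m 0x1 &                            # owns /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &     # no conflict on a private socket
  scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version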
00:10:25.005 [2024-11-20 15:22:10.890558] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:25.005 [2024-11-20 15:22:10.890597] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58459 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58459 ']' 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58459 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58459 00:10:25.264 killing process with pid 58459 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58459' 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58459 00:10:25.264 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58459 00:10:27.800 00:10:27.800 real 0m4.785s 00:10:27.800 user 0m5.238s 00:10:27.800 sys 0m0.700s 00:10:27.800 15:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.800 15:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:27.800 ************************************ 00:10:27.800 END TEST exit_on_failed_rpc_init 00:10:27.800 ************************************ 00:10:27.800 15:22:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:27.800 ************************************ 00:10:27.800 END TEST skip_rpc 00:10:27.800 ************************************ 00:10:27.800 00:10:27.800 real 0m24.963s 00:10:27.800 user 0m23.879s 00:10:27.800 sys 0m2.493s 00:10:27.800 15:22:13 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.800 15:22:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.800 15:22:13 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:27.800 15:22:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:27.800 15:22:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.800 15:22:13 -- common/autotest_common.sh@10 -- # set +x 00:10:28.060 
************************************ 00:10:28.060 START TEST rpc_client 00:10:28.060 ************************************ 00:10:28.060 15:22:13 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:28.060 * Looking for test storage... 00:10:28.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:28.060 15:22:13 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:28.060 15:22:13 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:28.060 15:22:13 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:10:28.060 15:22:13 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.060 15:22:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:28.060 15:22:13 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.060 15:22:13 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:28.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.060 --rc genhtml_branch_coverage=1 00:10:28.060 --rc genhtml_function_coverage=1 00:10:28.060 --rc genhtml_legend=1 00:10:28.060 --rc geninfo_all_blocks=1 00:10:28.060 --rc geninfo_unexecuted_blocks=1 00:10:28.060 00:10:28.060 ' 00:10:28.060 15:22:13 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:28.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.060 --rc genhtml_branch_coverage=1 00:10:28.060 --rc genhtml_function_coverage=1 00:10:28.060 --rc genhtml_legend=1 00:10:28.060 --rc geninfo_all_blocks=1 00:10:28.060 --rc geninfo_unexecuted_blocks=1 00:10:28.060 00:10:28.060 ' 00:10:28.060 15:22:13 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:28.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.060 --rc genhtml_branch_coverage=1 00:10:28.060 --rc genhtml_function_coverage=1 00:10:28.060 --rc genhtml_legend=1 00:10:28.060 --rc geninfo_all_blocks=1 00:10:28.060 --rc geninfo_unexecuted_blocks=1 00:10:28.060 00:10:28.060 ' 00:10:28.060 15:22:13 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:28.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.060 --rc genhtml_branch_coverage=1 00:10:28.060 --rc genhtml_function_coverage=1 00:10:28.060 --rc genhtml_legend=1 00:10:28.060 --rc geninfo_all_blocks=1 00:10:28.060 --rc geninfo_unexecuted_blocks=1 00:10:28.060 00:10:28.060 ' 00:10:28.060 15:22:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:28.060 OK 00:10:28.319 15:22:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:28.319 00:10:28.319 real 0m0.271s 00:10:28.319 user 0m0.136s 00:10:28.319 sys 0m0.146s 00:10:28.319 ************************************ 00:10:28.319 END TEST rpc_client 00:10:28.319 ************************************ 00:10:28.319 15:22:14 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.319 15:22:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:28.319 15:22:14 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:28.319 15:22:14 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:28.319 15:22:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.319 15:22:14 -- common/autotest_common.sh@10 -- # set +x 00:10:28.319 ************************************ 00:10:28.319 START TEST json_config 00:10:28.319 ************************************ 00:10:28.319 15:22:14 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:28.319 15:22:14 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:28.319 15:22:14 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:10:28.319 15:22:14 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:28.319 15:22:14 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:28.319 15:22:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.319 15:22:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.319 15:22:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.319 15:22:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.319 15:22:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.319 15:22:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.319 15:22:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.319 15:22:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.319 15:22:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.319 15:22:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.319 15:22:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.319 15:22:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:28.319 15:22:14 json_config -- scripts/common.sh@345 -- # : 1 00:10:28.319 15:22:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.319 15:22:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.319 15:22:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:28.319 15:22:14 json_config -- scripts/common.sh@353 -- # local d=1 00:10:28.319 15:22:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.319 15:22:14 json_config -- scripts/common.sh@355 -- # echo 1 00:10:28.319 15:22:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.319 15:22:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:28.319 15:22:14 json_config -- scripts/common.sh@353 -- # local d=2 00:10:28.319 15:22:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.319 15:22:14 json_config -- scripts/common.sh@355 -- # echo 2 00:10:28.319 15:22:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.319 15:22:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.319 15:22:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.319 15:22:14 json_config -- scripts/common.sh@368 -- # return 0 00:10:28.319 15:22:14 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.319 15:22:14 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:28.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.319 --rc genhtml_branch_coverage=1 00:10:28.319 --rc genhtml_function_coverage=1 00:10:28.319 --rc genhtml_legend=1 00:10:28.319 --rc geninfo_all_blocks=1 00:10:28.320 --rc geninfo_unexecuted_blocks=1 00:10:28.320 00:10:28.320 ' 00:10:28.320 15:22:14 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:28.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.320 --rc genhtml_branch_coverage=1 00:10:28.320 --rc genhtml_function_coverage=1 00:10:28.320 --rc genhtml_legend=1 00:10:28.320 --rc geninfo_all_blocks=1 00:10:28.320 --rc geninfo_unexecuted_blocks=1 00:10:28.320 00:10:28.320 ' 00:10:28.320 15:22:14 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:28.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.320 --rc genhtml_branch_coverage=1 00:10:28.320 --rc genhtml_function_coverage=1 00:10:28.320 --rc genhtml_legend=1 00:10:28.320 --rc geninfo_all_blocks=1 00:10:28.320 --rc geninfo_unexecuted_blocks=1 00:10:28.320 00:10:28.320 ' 00:10:28.320 15:22:14 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:28.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.320 --rc genhtml_branch_coverage=1 00:10:28.320 --rc genhtml_function_coverage=1 00:10:28.320 --rc genhtml_legend=1 00:10:28.320 --rc geninfo_all_blocks=1 00:10:28.320 --rc geninfo_unexecuted_blocks=1 00:10:28.320 00:10:28.320 ' 00:10:28.320 15:22:14 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:28.320 15:22:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:28.320 15:22:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.320 15:22:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.320 15:22:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.320 15:22:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.320 15:22:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.320 15:22:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.320 15:22:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.320 15:22:14 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.320 15:22:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.320 15:22:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.580 15:22:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52b74f82-d2e0-4d56-b70b-48f9d2a5993a 00:10:28.580 15:22:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=52b74f82-d2e0-4d56-b70b-48f9d2a5993a 00:10:28.580 15:22:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.580 15:22:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.580 15:22:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:28.580 15:22:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.580 15:22:14 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.580 15:22:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.580 15:22:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.580 15:22:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.581 15:22:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.581 15:22:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.581 15:22:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.581 15:22:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.581 15:22:14 json_config -- paths/export.sh@5 -- # export PATH 00:10:28.581 15:22:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.581 15:22:14 json_config -- nvmf/common.sh@51 -- # : 0 00:10:28.581 15:22:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.581 15:22:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.581 15:22:14 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.581 15:22:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.581 15:22:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.581 15:22:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.581 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.581 15:22:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.581 15:22:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.581 15:22:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.581 15:22:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:28.581 15:22:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:28.581 15:22:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:28.581 WARNING: No tests are enabled so not running JSON configuration tests 00:10:28.581 15:22:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:28.581 15:22:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:28.581 15:22:14 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:10:28.581 15:22:14 json_config -- json_config/json_config.sh@28 -- # exit 0 00:10:28.581 00:10:28.581 real 0m0.203s 00:10:28.581 user 0m0.122s 00:10:28.581 sys 0m0.087s 00:10:28.581 15:22:14 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.581 15:22:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:28.581 ************************************ 00:10:28.581 END TEST json_config 00:10:28.581 ************************************ 00:10:28.581 15:22:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:28.581 15:22:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:28.581 15:22:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.581 15:22:14 -- common/autotest_common.sh@10 -- # set +x 00:10:28.581 ************************************ 00:10:28.581 START TEST json_config_extra_key 00:10:28.581 ************************************ 00:10:28.581 15:22:14 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:28.581 15:22:14 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:28.581 15:22:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:28.581 15:22:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:10:28.581 15:22:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.581 15:22:14 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.581 15:22:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:28.581 15:22:14 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.581 15:22:14 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.581 --rc genhtml_branch_coverage=1 00:10:28.581 --rc genhtml_function_coverage=1 00:10:28.581 --rc genhtml_legend=1 00:10:28.581 --rc geninfo_all_blocks=1 00:10:28.581 --rc geninfo_unexecuted_blocks=1 00:10:28.581 00:10:28.581 ' 00:10:28.581 15:22:14 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.581 --rc genhtml_branch_coverage=1 00:10:28.581 --rc genhtml_function_coverage=1 00:10:28.581 --rc genhtml_legend=1 00:10:28.581 --rc geninfo_all_blocks=1 00:10:28.581 --rc geninfo_unexecuted_blocks=1 00:10:28.581 00:10:28.581 ' 00:10:28.581 15:22:14 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.581 --rc genhtml_branch_coverage=1 00:10:28.581 --rc genhtml_function_coverage=1 00:10:28.581 --rc genhtml_legend=1 00:10:28.581 --rc geninfo_all_blocks=1 00:10:28.581 --rc geninfo_unexecuted_blocks=1 00:10:28.581 00:10:28.581 ' 00:10:28.581 15:22:14 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.581 --rc genhtml_branch_coverage=1 00:10:28.581 --rc 
genhtml_function_coverage=1 00:10:28.581 --rc genhtml_legend=1 00:10:28.581 --rc geninfo_all_blocks=1 00:10:28.581 --rc geninfo_unexecuted_blocks=1 00:10:28.581 00:10:28.581 ' 00:10:28.581 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:28.581 15:22:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:28.581 15:22:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.581 15:22:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.581 15:22:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.581 15:22:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.581 15:22:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.581 15:22:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.581 15:22:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.581 15:22:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.581 15:22:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52b74f82-d2e0-4d56-b70b-48f9d2a5993a 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=52b74f82-d2e0-4d56-b70b-48f9d2a5993a 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.850 15:22:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.850 15:22:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.850 15:22:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.850 15:22:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.850 15:22:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.850 15:22:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.850 15:22:14 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.850 15:22:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:28.850 15:22:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.850 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.850 15:22:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:28.850 INFO: launching applications... 00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
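The "[: : integer expression expected" lines in the trace above come from nvmf/common.sh line 33, where bash's test builtin receives an empty string on the left-hand side of -eq; the comparison simply exits non-zero and the script continues, so the message is harmless noise rather than a test failure. A minimal sketch of the failure mode and a defensive rewrite (the variable name is illustrative, not taken from common.sh):

    # Reproduces the message when the variable is unset or empty:
    flag=''
    [ "$flag" -eq 1 ] && echo enabled   # bash: [: : integer expression expected

    # Defensive variant: substitute 0 for an empty value before the numeric test.
    [ "${flag:-0}" -eq 1 ] && echo enabled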
00:10:28.850 15:22:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:28.850 15:22:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:28.850 15:22:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:28.850 15:22:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:28.850 15:22:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:28.850 15:22:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:28.850 15:22:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:28.850 15:22:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:28.850 Waiting for target to run... 00:10:28.851 15:22:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58693 00:10:28.851 15:22:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:28.851 15:22:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58693 /var/tmp/spdk_tgt.sock 00:10:28.851 15:22:14 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58693 ']' 00:10:28.851 15:22:14 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:28.851 15:22:14 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.851 15:22:14 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:28.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:28.851 15:22:14 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.851 15:22:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:28.851 15:22:14 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:28.851 [2024-11-20 15:22:14.704418] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:10:28.851 [2024-11-20 15:22:14.704622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58693 ] 00:10:29.416 [2024-11-20 15:22:15.157882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.416 [2024-11-20 15:22:15.345756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.350 15:22:16 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.350 00:10:30.350 15:22:16 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:30.350 15:22:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:30.350 INFO: shutting down applications... 00:10:30.350 15:22:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
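The shutdown that follows is json_config/common.sh's polling pattern: one SIGINT asks the target to exit cleanly, then kill -0 (an existence check that delivers no signal) is retried every half second, giving up after 30 iterations. A condensed sketch of the loop the trace below steps through, assuming $pid holds the target's PID:

    kill -SIGINT "$pid"                      # request a clean shutdown
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break  # kill -0 only tests that the PID is still alive
        sleep 0.5
    done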
00:10:30.350 15:22:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:30.350 15:22:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:30.350 15:22:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:30.350 15:22:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58693 ]] 00:10:30.350 15:22:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58693 00:10:30.350 15:22:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:30.350 15:22:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:30.350 15:22:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58693 00:10:30.350 15:22:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:30.918 15:22:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:30.918 15:22:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:30.918 15:22:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58693 00:10:30.918 15:22:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:31.485 15:22:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:31.485 15:22:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:31.485 15:22:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58693 00:10:31.485 15:22:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:32.052 15:22:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:32.052 15:22:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:32.052 15:22:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58693 00:10:32.052 15:22:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:32.311 15:22:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:32.311 15:22:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:32.311 15:22:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58693 00:10:32.311 15:22:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:32.981 15:22:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:32.981 15:22:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:32.981 15:22:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58693 00:10:32.981 15:22:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:33.566 15:22:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:33.566 15:22:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:33.566 15:22:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58693 00:10:33.566 15:22:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:33.566 15:22:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:33.566 15:22:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:33.566 SPDK target shutdown done 00:10:33.566 15:22:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:33.566 Success 00:10:33.566 15:22:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:33.566 00:10:33.566 real 0m4.870s 00:10:33.566 user 0m4.588s 00:10:33.566 sys 0m0.676s 00:10:33.566 
15:22:19 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.566 ************************************ 00:10:33.566 END TEST json_config_extra_key 00:10:33.566 ************************************ 00:10:33.566 15:22:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:33.566 15:22:19 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:33.566 15:22:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:33.566 15:22:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.566 15:22:19 -- common/autotest_common.sh@10 -- # set +x 00:10:33.566 ************************************ 00:10:33.566 START TEST alias_rpc 00:10:33.566 ************************************ 00:10:33.566 15:22:19 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:33.566 * Looking for test storage... 00:10:33.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:33.566 15:22:19 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:33.566 15:22:19 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:33.566 15:22:19 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:33.566 15:22:19 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:33.566 15:22:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.566 15:22:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.567 15:22:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:33.567 15:22:19 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.567 15:22:19 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:33.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.567 --rc genhtml_branch_coverage=1 00:10:33.567 --rc genhtml_function_coverage=1 00:10:33.567 --rc genhtml_legend=1 00:10:33.567 --rc geninfo_all_blocks=1 00:10:33.567 --rc geninfo_unexecuted_blocks=1 00:10:33.567 00:10:33.567 ' 00:10:33.567 15:22:19 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:33.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.567 --rc genhtml_branch_coverage=1 00:10:33.567 --rc genhtml_function_coverage=1 00:10:33.567 --rc genhtml_legend=1 00:10:33.567 --rc geninfo_all_blocks=1 00:10:33.567 --rc geninfo_unexecuted_blocks=1 00:10:33.567 00:10:33.567 ' 00:10:33.567 15:22:19 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:33.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.567 --rc genhtml_branch_coverage=1 00:10:33.567 --rc genhtml_function_coverage=1 00:10:33.567 --rc genhtml_legend=1 00:10:33.567 --rc geninfo_all_blocks=1 00:10:33.567 --rc geninfo_unexecuted_blocks=1 00:10:33.567 00:10:33.567 ' 00:10:33.567 15:22:19 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:33.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.567 --rc genhtml_branch_coverage=1 00:10:33.567 --rc genhtml_function_coverage=1 00:10:33.567 --rc genhtml_legend=1 00:10:33.567 --rc geninfo_all_blocks=1 00:10:33.567 --rc geninfo_unexecuted_blocks=1 00:10:33.567 00:10:33.567 ' 00:10:33.567 15:22:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:33.567 15:22:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58810 00:10:33.567 15:22:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:33.567 15:22:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58810 00:10:33.567 15:22:19 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58810 ']' 00:10:33.567 15:22:19 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.567 15:22:19 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.567 15:22:19 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:33.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.567 15:22:19 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.567 15:22:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.825 [2024-11-20 15:22:19.624657] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:10:33.825 [2024-11-20 15:22:19.624845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58810 ] 00:10:34.083 [2024-11-20 15:22:19.815268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.083 [2024-11-20 15:22:19.929372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.069 15:22:20 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.069 15:22:20 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:35.069 15:22:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:35.327 15:22:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58810 00:10:35.327 15:22:21 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58810 ']' 00:10:35.327 15:22:21 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58810 00:10:35.327 15:22:21 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:35.327 15:22:21 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.327 15:22:21 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58810 00:10:35.327 15:22:21 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.327 killing process with pid 58810 00:10:35.327 15:22:21 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.327 15:22:21 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58810' 00:10:35.327 15:22:21 alias_rpc -- common/autotest_common.sh@973 -- # kill 58810 00:10:35.327 15:22:21 alias_rpc -- common/autotest_common.sh@978 -- # wait 58810 00:10:37.851 00:10:37.851 real 0m4.292s 00:10:37.851 user 0m4.424s 00:10:37.851 sys 0m0.648s 00:10:37.851 15:22:23 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.851 ************************************ 00:10:37.851 END TEST alias_rpc 00:10:37.851 ************************************ 00:10:37.851 15:22:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.851 15:22:23 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:37.851 15:22:23 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:37.851 15:22:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:37.851 15:22:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.851 15:22:23 -- common/autotest_common.sh@10 -- # set +x 00:10:37.851 ************************************ 00:10:37.851 START TEST spdkcli_tcp 00:10:37.851 ************************************ 00:10:37.851 15:22:23 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:37.851 * Looking for test storage... 
00:10:37.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:37.851 15:22:23 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.851 15:22:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.851 15:22:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.109 15:22:23 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:38.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.109 --rc genhtml_branch_coverage=1 00:10:38.109 --rc genhtml_function_coverage=1 00:10:38.109 --rc genhtml_legend=1 00:10:38.109 --rc geninfo_all_blocks=1 00:10:38.109 --rc geninfo_unexecuted_blocks=1 00:10:38.109 00:10:38.109 ' 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:38.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.109 --rc genhtml_branch_coverage=1 00:10:38.109 --rc genhtml_function_coverage=1 00:10:38.109 --rc genhtml_legend=1 00:10:38.109 --rc geninfo_all_blocks=1 00:10:38.109 --rc geninfo_unexecuted_blocks=1 00:10:38.109 
00:10:38.109 ' 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:38.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.109 --rc genhtml_branch_coverage=1 00:10:38.109 --rc genhtml_function_coverage=1 00:10:38.109 --rc genhtml_legend=1 00:10:38.109 --rc geninfo_all_blocks=1 00:10:38.109 --rc geninfo_unexecuted_blocks=1 00:10:38.109 00:10:38.109 ' 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:38.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.109 --rc genhtml_branch_coverage=1 00:10:38.109 --rc genhtml_function_coverage=1 00:10:38.109 --rc genhtml_legend=1 00:10:38.109 --rc geninfo_all_blocks=1 00:10:38.109 --rc geninfo_unexecuted_blocks=1 00:10:38.109 00:10:38.109 ' 00:10:38.109 15:22:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:38.109 15:22:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:38.109 15:22:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:38.109 15:22:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:38.109 15:22:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:38.109 15:22:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:38.109 15:22:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:38.109 15:22:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58917 00:10:38.109 15:22:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58917 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58917 ']' 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.109 15:22:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.109 15:22:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:38.109 [2024-11-20 15:22:23.995136] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
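By default spdk_tgt serves JSON-RPC only on a UNIX-domain socket, so the tcp.sh run below bridges that socket to TCP with socat before driving it with rpc.py over 127.0.0.1:9998. A minimal sketch of the same bridge, assuming the target is already listening on /var/tmp/spdk.sock:

    # Forward TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # RPCs now work over TCP; -r bounds connection retries and -t the per-request timeout.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods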
00:10:38.109 [2024-11-20 15:22:23.995331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58917 ] 00:10:38.367 [2024-11-20 15:22:24.191052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:38.367 [2024-11-20 15:22:24.308375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.367 [2024-11-20 15:22:24.308409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.300 15:22:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.300 15:22:25 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:39.300 15:22:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58934 00:10:39.300 15:22:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:39.300 15:22:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:39.559 [ 00:10:39.559 "bdev_malloc_delete", 00:10:39.559 "bdev_malloc_create", 00:10:39.559 "bdev_null_resize", 00:10:39.559 "bdev_null_delete", 00:10:39.559 "bdev_null_create", 00:10:39.559 "bdev_nvme_cuse_unregister", 00:10:39.559 "bdev_nvme_cuse_register", 00:10:39.559 "bdev_opal_new_user", 00:10:39.559 "bdev_opal_set_lock_state", 00:10:39.559 "bdev_opal_delete", 00:10:39.559 "bdev_opal_get_info", 00:10:39.559 "bdev_opal_create", 00:10:39.559 "bdev_nvme_opal_revert", 00:10:39.559 "bdev_nvme_opal_init", 00:10:39.559 "bdev_nvme_send_cmd", 00:10:39.559 "bdev_nvme_set_keys", 00:10:39.559 "bdev_nvme_get_path_iostat", 00:10:39.559 "bdev_nvme_get_mdns_discovery_info", 00:10:39.559 "bdev_nvme_stop_mdns_discovery", 00:10:39.559 "bdev_nvme_start_mdns_discovery", 00:10:39.559 "bdev_nvme_set_multipath_policy", 00:10:39.559 "bdev_nvme_set_preferred_path", 00:10:39.559 "bdev_nvme_get_io_paths", 00:10:39.559 "bdev_nvme_remove_error_injection", 00:10:39.559 "bdev_nvme_add_error_injection", 00:10:39.559 "bdev_nvme_get_discovery_info", 00:10:39.559 "bdev_nvme_stop_discovery", 00:10:39.559 "bdev_nvme_start_discovery", 00:10:39.559 "bdev_nvme_get_controller_health_info", 00:10:39.559 "bdev_nvme_disable_controller", 00:10:39.559 "bdev_nvme_enable_controller", 00:10:39.559 "bdev_nvme_reset_controller", 00:10:39.559 "bdev_nvme_get_transport_statistics", 00:10:39.559 "bdev_nvme_apply_firmware", 00:10:39.559 "bdev_nvme_detach_controller", 00:10:39.559 "bdev_nvme_get_controllers", 00:10:39.559 "bdev_nvme_attach_controller", 00:10:39.559 "bdev_nvme_set_hotplug", 00:10:39.559 "bdev_nvme_set_options", 00:10:39.559 "bdev_passthru_delete", 00:10:39.559 "bdev_passthru_create", 00:10:39.559 "bdev_lvol_set_parent_bdev", 00:10:39.559 "bdev_lvol_set_parent", 00:10:39.559 "bdev_lvol_check_shallow_copy", 00:10:39.559 "bdev_lvol_start_shallow_copy", 00:10:39.559 "bdev_lvol_grow_lvstore", 00:10:39.559 "bdev_lvol_get_lvols", 00:10:39.559 "bdev_lvol_get_lvstores", 00:10:39.559 "bdev_lvol_delete", 00:10:39.559 "bdev_lvol_set_read_only", 00:10:39.559 "bdev_lvol_resize", 00:10:39.559 "bdev_lvol_decouple_parent", 00:10:39.559 "bdev_lvol_inflate", 00:10:39.559 "bdev_lvol_rename", 00:10:39.559 "bdev_lvol_clone_bdev", 00:10:39.559 "bdev_lvol_clone", 00:10:39.559 "bdev_lvol_snapshot", 00:10:39.559 "bdev_lvol_create", 00:10:39.559 "bdev_lvol_delete_lvstore", 00:10:39.559 "bdev_lvol_rename_lvstore", 00:10:39.559 
"bdev_lvol_create_lvstore", 00:10:39.559 "bdev_raid_set_options", 00:10:39.559 "bdev_raid_remove_base_bdev", 00:10:39.559 "bdev_raid_add_base_bdev", 00:10:39.559 "bdev_raid_delete", 00:10:39.559 "bdev_raid_create", 00:10:39.559 "bdev_raid_get_bdevs", 00:10:39.559 "bdev_error_inject_error", 00:10:39.559 "bdev_error_delete", 00:10:39.559 "bdev_error_create", 00:10:39.559 "bdev_split_delete", 00:10:39.559 "bdev_split_create", 00:10:39.559 "bdev_delay_delete", 00:10:39.559 "bdev_delay_create", 00:10:39.559 "bdev_delay_update_latency", 00:10:39.559 "bdev_zone_block_delete", 00:10:39.559 "bdev_zone_block_create", 00:10:39.559 "blobfs_create", 00:10:39.559 "blobfs_detect", 00:10:39.559 "blobfs_set_cache_size", 00:10:39.559 "bdev_xnvme_delete", 00:10:39.559 "bdev_xnvme_create", 00:10:39.559 "bdev_aio_delete", 00:10:39.559 "bdev_aio_rescan", 00:10:39.559 "bdev_aio_create", 00:10:39.559 "bdev_ftl_set_property", 00:10:39.559 "bdev_ftl_get_properties", 00:10:39.559 "bdev_ftl_get_stats", 00:10:39.560 "bdev_ftl_unmap", 00:10:39.560 "bdev_ftl_unload", 00:10:39.560 "bdev_ftl_delete", 00:10:39.560 "bdev_ftl_load", 00:10:39.560 "bdev_ftl_create", 00:10:39.560 "bdev_virtio_attach_controller", 00:10:39.560 "bdev_virtio_scsi_get_devices", 00:10:39.560 "bdev_virtio_detach_controller", 00:10:39.560 "bdev_virtio_blk_set_hotplug", 00:10:39.560 "bdev_iscsi_delete", 00:10:39.560 "bdev_iscsi_create", 00:10:39.560 "bdev_iscsi_set_options", 00:10:39.560 "accel_error_inject_error", 00:10:39.560 "ioat_scan_accel_module", 00:10:39.560 "dsa_scan_accel_module", 00:10:39.560 "iaa_scan_accel_module", 00:10:39.560 "keyring_file_remove_key", 00:10:39.560 "keyring_file_add_key", 00:10:39.560 "keyring_linux_set_options", 00:10:39.560 "fsdev_aio_delete", 00:10:39.560 "fsdev_aio_create", 00:10:39.560 "iscsi_get_histogram", 00:10:39.560 "iscsi_enable_histogram", 00:10:39.560 "iscsi_set_options", 00:10:39.560 "iscsi_get_auth_groups", 00:10:39.560 "iscsi_auth_group_remove_secret", 00:10:39.560 "iscsi_auth_group_add_secret", 00:10:39.560 "iscsi_delete_auth_group", 00:10:39.560 "iscsi_create_auth_group", 00:10:39.560 "iscsi_set_discovery_auth", 00:10:39.560 "iscsi_get_options", 00:10:39.560 "iscsi_target_node_request_logout", 00:10:39.560 "iscsi_target_node_set_redirect", 00:10:39.560 "iscsi_target_node_set_auth", 00:10:39.560 "iscsi_target_node_add_lun", 00:10:39.560 "iscsi_get_stats", 00:10:39.560 "iscsi_get_connections", 00:10:39.560 "iscsi_portal_group_set_auth", 00:10:39.560 "iscsi_start_portal_group", 00:10:39.560 "iscsi_delete_portal_group", 00:10:39.560 "iscsi_create_portal_group", 00:10:39.560 "iscsi_get_portal_groups", 00:10:39.560 "iscsi_delete_target_node", 00:10:39.560 "iscsi_target_node_remove_pg_ig_maps", 00:10:39.560 "iscsi_target_node_add_pg_ig_maps", 00:10:39.560 "iscsi_create_target_node", 00:10:39.560 "iscsi_get_target_nodes", 00:10:39.560 "iscsi_delete_initiator_group", 00:10:39.560 "iscsi_initiator_group_remove_initiators", 00:10:39.560 "iscsi_initiator_group_add_initiators", 00:10:39.560 "iscsi_create_initiator_group", 00:10:39.560 "iscsi_get_initiator_groups", 00:10:39.560 "nvmf_set_crdt", 00:10:39.560 "nvmf_set_config", 00:10:39.560 "nvmf_set_max_subsystems", 00:10:39.560 "nvmf_stop_mdns_prr", 00:10:39.560 "nvmf_publish_mdns_prr", 00:10:39.560 "nvmf_subsystem_get_listeners", 00:10:39.560 "nvmf_subsystem_get_qpairs", 00:10:39.560 "nvmf_subsystem_get_controllers", 00:10:39.560 "nvmf_get_stats", 00:10:39.560 "nvmf_get_transports", 00:10:39.560 "nvmf_create_transport", 00:10:39.560 "nvmf_get_targets", 00:10:39.560 
"nvmf_delete_target", 00:10:39.560 "nvmf_create_target", 00:10:39.560 "nvmf_subsystem_allow_any_host", 00:10:39.560 "nvmf_subsystem_set_keys", 00:10:39.560 "nvmf_subsystem_remove_host", 00:10:39.560 "nvmf_subsystem_add_host", 00:10:39.560 "nvmf_ns_remove_host", 00:10:39.560 "nvmf_ns_add_host", 00:10:39.560 "nvmf_subsystem_remove_ns", 00:10:39.560 "nvmf_subsystem_set_ns_ana_group", 00:10:39.560 "nvmf_subsystem_add_ns", 00:10:39.560 "nvmf_subsystem_listener_set_ana_state", 00:10:39.560 "nvmf_discovery_get_referrals", 00:10:39.560 "nvmf_discovery_remove_referral", 00:10:39.560 "nvmf_discovery_add_referral", 00:10:39.560 "nvmf_subsystem_remove_listener", 00:10:39.560 "nvmf_subsystem_add_listener", 00:10:39.560 "nvmf_delete_subsystem", 00:10:39.560 "nvmf_create_subsystem", 00:10:39.560 "nvmf_get_subsystems", 00:10:39.560 "env_dpdk_get_mem_stats", 00:10:39.560 "nbd_get_disks", 00:10:39.560 "nbd_stop_disk", 00:10:39.560 "nbd_start_disk", 00:10:39.560 "ublk_recover_disk", 00:10:39.560 "ublk_get_disks", 00:10:39.560 "ublk_stop_disk", 00:10:39.560 "ublk_start_disk", 00:10:39.560 "ublk_destroy_target", 00:10:39.560 "ublk_create_target", 00:10:39.560 "virtio_blk_create_transport", 00:10:39.560 "virtio_blk_get_transports", 00:10:39.560 "vhost_controller_set_coalescing", 00:10:39.560 "vhost_get_controllers", 00:10:39.560 "vhost_delete_controller", 00:10:39.560 "vhost_create_blk_controller", 00:10:39.560 "vhost_scsi_controller_remove_target", 00:10:39.560 "vhost_scsi_controller_add_target", 00:10:39.560 "vhost_start_scsi_controller", 00:10:39.560 "vhost_create_scsi_controller", 00:10:39.560 "thread_set_cpumask", 00:10:39.560 "scheduler_set_options", 00:10:39.560 "framework_get_governor", 00:10:39.560 "framework_get_scheduler", 00:10:39.560 "framework_set_scheduler", 00:10:39.560 "framework_get_reactors", 00:10:39.560 "thread_get_io_channels", 00:10:39.560 "thread_get_pollers", 00:10:39.560 "thread_get_stats", 00:10:39.560 "framework_monitor_context_switch", 00:10:39.560 "spdk_kill_instance", 00:10:39.560 "log_enable_timestamps", 00:10:39.560 "log_get_flags", 00:10:39.560 "log_clear_flag", 00:10:39.560 "log_set_flag", 00:10:39.560 "log_get_level", 00:10:39.560 "log_set_level", 00:10:39.560 "log_get_print_level", 00:10:39.560 "log_set_print_level", 00:10:39.560 "framework_enable_cpumask_locks", 00:10:39.560 "framework_disable_cpumask_locks", 00:10:39.560 "framework_wait_init", 00:10:39.560 "framework_start_init", 00:10:39.560 "scsi_get_devices", 00:10:39.560 "bdev_get_histogram", 00:10:39.560 "bdev_enable_histogram", 00:10:39.560 "bdev_set_qos_limit", 00:10:39.560 "bdev_set_qd_sampling_period", 00:10:39.560 "bdev_get_bdevs", 00:10:39.560 "bdev_reset_iostat", 00:10:39.560 "bdev_get_iostat", 00:10:39.560 "bdev_examine", 00:10:39.560 "bdev_wait_for_examine", 00:10:39.560 "bdev_set_options", 00:10:39.560 "accel_get_stats", 00:10:39.560 "accel_set_options", 00:10:39.560 "accel_set_driver", 00:10:39.560 "accel_crypto_key_destroy", 00:10:39.560 "accel_crypto_keys_get", 00:10:39.560 "accel_crypto_key_create", 00:10:39.560 "accel_assign_opc", 00:10:39.560 "accel_get_module_info", 00:10:39.560 "accel_get_opc_assignments", 00:10:39.560 "vmd_rescan", 00:10:39.560 "vmd_remove_device", 00:10:39.560 "vmd_enable", 00:10:39.560 "sock_get_default_impl", 00:10:39.560 "sock_set_default_impl", 00:10:39.560 "sock_impl_set_options", 00:10:39.560 "sock_impl_get_options", 00:10:39.560 "iobuf_get_stats", 00:10:39.560 "iobuf_set_options", 00:10:39.560 "keyring_get_keys", 00:10:39.560 "framework_get_pci_devices", 00:10:39.560 
"framework_get_config", 00:10:39.560 "framework_get_subsystems", 00:10:39.560 "fsdev_set_opts", 00:10:39.560 "fsdev_get_opts", 00:10:39.560 "trace_get_info", 00:10:39.560 "trace_get_tpoint_group_mask", 00:10:39.560 "trace_disable_tpoint_group", 00:10:39.560 "trace_enable_tpoint_group", 00:10:39.560 "trace_clear_tpoint_mask", 00:10:39.560 "trace_set_tpoint_mask", 00:10:39.560 "notify_get_notifications", 00:10:39.560 "notify_get_types", 00:10:39.560 "spdk_get_version", 00:10:39.560 "rpc_get_methods" 00:10:39.560 ] 00:10:39.560 15:22:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:39.560 15:22:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:39.560 15:22:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58917 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58917 ']' 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58917 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58917 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.560 killing process with pid 58917 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58917' 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58917 00:10:39.560 15:22:25 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58917 00:10:42.172 00:10:42.172 real 0m4.317s 00:10:42.172 user 0m7.713s 00:10:42.172 sys 0m0.719s 00:10:42.172 15:22:27 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.172 ************************************ 00:10:42.172 END TEST spdkcli_tcp 00:10:42.172 ************************************ 00:10:42.172 15:22:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.172 15:22:28 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:42.172 15:22:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:42.172 15:22:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.172 15:22:28 -- common/autotest_common.sh@10 -- # set +x 00:10:42.172 ************************************ 00:10:42.172 START TEST dpdk_mem_utility 00:10:42.172 ************************************ 00:10:42.172 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:42.172 * Looking for test storage... 
00:10:42.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:42.172 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.172 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.172 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.431 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.431 15:22:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.432 15:22:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:42.432 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.432 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.432 --rc genhtml_branch_coverage=1 00:10:42.432 --rc genhtml_function_coverage=1 00:10:42.432 --rc genhtml_legend=1 00:10:42.432 --rc geninfo_all_blocks=1 00:10:42.432 --rc geninfo_unexecuted_blocks=1 00:10:42.432 00:10:42.432 ' 00:10:42.432 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.432 --rc 
genhtml_branch_coverage=1 00:10:42.432 --rc genhtml_function_coverage=1 00:10:42.432 --rc genhtml_legend=1 00:10:42.432 --rc geninfo_all_blocks=1 00:10:42.432 --rc geninfo_unexecuted_blocks=1 00:10:42.432 00:10:42.432 ' 00:10:42.432 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.432 --rc genhtml_branch_coverage=1 00:10:42.432 --rc genhtml_function_coverage=1 00:10:42.432 --rc genhtml_legend=1 00:10:42.432 --rc geninfo_all_blocks=1 00:10:42.432 --rc geninfo_unexecuted_blocks=1 00:10:42.432 00:10:42.432 ' 00:10:42.432 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.432 --rc genhtml_branch_coverage=1 00:10:42.432 --rc genhtml_function_coverage=1 00:10:42.432 --rc genhtml_legend=1 00:10:42.432 --rc geninfo_all_blocks=1 00:10:42.432 --rc geninfo_unexecuted_blocks=1 00:10:42.432 00:10:42.432 ' 00:10:42.432 15:22:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:42.432 15:22:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59039 00:10:42.432 15:22:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:42.432 15:22:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59039 00:10:42.432 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59039 ']' 00:10:42.432 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.432 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.432 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.432 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.432 15:22:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:42.432 [2024-11-20 15:22:28.360517] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
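The dpdk_mem_utility run below exercises two pieces: the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK heap, mempool, and memzone state to a dump file, and scripts/dpdk_mem_info.py, which summarizes that dump (the -m 0 form, as the trace shows, breaks busy and free elements down for heap id 0). A minimal sketch of the same sequence against an already-running target:

    # Ask the target to dump its DPDK allocator state; the RPC replies with the file path.
    scripts/rpc.py env_dpdk_get_mem_stats
    # => { "filename": "/tmp/spdk_mem_dump.txt" }

    # Summarize the dump, then show per-heap element detail for heap id 0.
    scripts/dpdk_mem_info.py
    scripts/dpdk_mem_info.py -m 0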
00:10:42.432 [2024-11-20 15:22:28.360708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59039 ] 00:10:42.691 [2024-11-20 15:22:28.551142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.951 [2024-11-20 15:22:28.670350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.888 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.888 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:10:43.888 15:22:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:43.888 15:22:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:43.888 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.888 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:43.888 { 00:10:43.888 "filename": "/tmp/spdk_mem_dump.txt" 00:10:43.888 } 00:10:43.888 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.888 15:22:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:43.888 DPDK memory size 824.000000 MiB in 1 heap(s) 00:10:43.888 1 heaps totaling size 824.000000 MiB 00:10:43.888 size: 824.000000 MiB heap id: 0 00:10:43.888 end heaps---------- 00:10:43.888 9 mempools totaling size 603.782043 MiB 00:10:43.888 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:43.888 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:43.888 size: 100.555481 MiB name: bdev_io_59039 00:10:43.888 size: 50.003479 MiB name: msgpool_59039 00:10:43.888 size: 36.509338 MiB name: fsdev_io_59039 00:10:43.888 size: 21.763794 MiB name: PDU_Pool 00:10:43.888 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:43.888 size: 4.133484 MiB name: evtpool_59039 00:10:43.888 size: 0.026123 MiB name: Session_Pool 00:10:43.888 end mempools------- 00:10:43.888 6 memzones totaling size 4.142822 MiB 00:10:43.888 size: 1.000366 MiB name: RG_ring_0_59039 00:10:43.888 size: 1.000366 MiB name: RG_ring_1_59039 00:10:43.889 size: 1.000366 MiB name: RG_ring_4_59039 00:10:43.889 size: 1.000366 MiB name: RG_ring_5_59039 00:10:43.889 size: 0.125366 MiB name: RG_ring_2_59039 00:10:43.889 size: 0.015991 MiB name: RG_ring_3_59039 00:10:43.889 end memzones------- 00:10:43.889 15:22:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:43.889 heap id: 0 total size: 824.000000 MiB number of busy elements: 314 number of free elements: 18 00:10:43.889 list of free elements. 
size: 16.781616 MiB
00:10:43.889 element at address: 0x200006400000 with size: 1.995972 MiB
00:10:43.889 element at address: 0x20000a600000 with size: 1.995972 MiB
00:10:43.889 element at address: 0x200003e00000 with size: 1.991028 MiB
00:10:43.889 element at address: 0x200019500040 with size: 0.999939 MiB
00:10:43.889 element at address: 0x200019900040 with size: 0.999939 MiB
00:10:43.889 element at address: 0x200019a00000 with size: 0.999084 MiB
00:10:43.889 element at address: 0x200032600000 with size: 0.994324 MiB
00:10:43.889 element at address: 0x200000400000 with size: 0.992004 MiB
00:10:43.889 element at address: 0x200019200000 with size: 0.959656 MiB
00:10:43.889 element at address: 0x200019d00040 with size: 0.936401 MiB
00:10:43.889 element at address: 0x200000200000 with size: 0.716980 MiB
00:10:43.889 element at address: 0x20001b400000 with size: 0.563171 MiB
00:10:43.889 element at address: 0x200000c00000 with size: 0.489197 MiB
00:10:43.889 element at address: 0x200019600000 with size: 0.487976 MiB
00:10:43.889 element at address: 0x200019e00000 with size: 0.485413 MiB
00:10:43.889 element at address: 0x200012c00000 with size: 0.433228 MiB
00:10:43.889 element at address: 0x200028800000 with size: 0.390442 MiB
00:10:43.889 element at address: 0x200000800000 with size: 0.350891 MiB
00:10:43.889 list of standard malloc elements. size: 199.287476 MiB
00:10:43.889 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:10:43.889 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:10:43.889 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:10:43.889 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:10:43.889 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:10:43.889 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:10:43.889 element at address: 0x200019deff40 with size: 0.062683 MiB
00:10:43.889 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:10:43.889 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:10:43.889 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:10:43.889 element at address: 0x200012bff040 with size: 0.000305 MiB
00:10:43.889 (several hundred further elements, from 0x2000002d7b00 through 0x20002886fe80 in the listing, each with size: 0.000244 MiB)
00:10:43.891 list of memzone associated elements.
size: 607.930908 MiB 00:10:43.891 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:10:43.891 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:43.891 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:10:43.891 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:43.891 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:10:43.891 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59039_0 00:10:43.891 element at address: 0x200000dff340 with size: 48.003113 MiB 00:10:43.891 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59039_0 00:10:43.891 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:10:43.891 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59039_0 00:10:43.891 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:10:43.891 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:43.891 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:10:43.891 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:43.891 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:10:43.891 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59039_0 00:10:43.891 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:10:43.891 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59039 00:10:43.891 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:43.891 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59039 00:10:43.891 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:10:43.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:43.891 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:10:43.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:43.891 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:43.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:43.891 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:10:43.891 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:43.891 element at address: 0x200000cff100 with size: 1.000549 MiB 00:10:43.891 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59039 00:10:43.891 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:10:43.891 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59039 00:10:43.891 element at address: 0x200019affd40 with size: 1.000549 MiB 00:10:43.891 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59039 00:10:43.891 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:10:43.891 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59039 00:10:43.891 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:10:43.891 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59039 00:10:43.891 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:10:43.891 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59039 00:10:43.891 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:10:43.891 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:43.891 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:10:43.891 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:43.891 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:10:43.891 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:10:43.891 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:10:43.891 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59039 00:10:43.891 element at address: 0x20000085df80 with size: 0.125549 MiB 00:10:43.891 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59039 00:10:43.891 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:10:43.891 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:43.891 element at address: 0x200028864140 with size: 0.023804 MiB 00:10:43.891 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:43.891 element at address: 0x200000859d40 with size: 0.016174 MiB 00:10:43.891 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59039 00:10:43.891 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:10:43.891 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:43.891 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:10:43.891 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59039 00:10:43.891 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:10:43.891 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59039 00:10:43.891 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:10:43.891 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59039 00:10:43.891 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:10:43.891 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:43.891 15:22:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:43.891 15:22:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59039 00:10:43.891 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59039 ']' 00:10:43.891 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59039 00:10:43.891 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:10:43.891 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.891 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59039 00:10:43.891 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.891 killing process with pid 59039 00:10:43.891 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.891 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59039' 00:10:43.891 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59039 00:10:43.891 15:22:29 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59039 00:10:46.427 00:10:46.427 real 0m4.060s 00:10:46.427 user 0m4.023s 00:10:46.427 sys 0m0.626s 00:10:46.427 ************************************ 00:10:46.427 END TEST dpdk_mem_utility 00:10:46.427 15:22:32 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.427 15:22:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:46.427 ************************************ 00:10:46.427 15:22:32 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:46.427 15:22:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.427 15:22:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.427 15:22:32 -- common/autotest_common.sh@10 -- # set +x 
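The element/memzone listing above is the DPDK heap dump that the dpdk_mem_utility test captures and checks. A minimal sketch of reproducing such a dump by hand, assuming the env_dpdk_get_mem_stats RPC that test_dpdk_mem_info.sh drives and its default dump path (both are assumptions, not taken from this log):

  # Start any SPDK app (it owns the DPDK heap), then ask it to dump memory stats.
  ./build/bin/spdk_tgt &
  ./scripts/rpc.py env_dpdk_get_mem_stats   # RPC name assumed from test_dpdk_mem_info.sh
  cat /tmp/spdk_mem_dump.txt                # dump path assumed; contains the malloc-element and memzone lists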
00:10:46.427 ************************************ 00:10:46.427 START TEST event 00:10:46.427 ************************************ 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:46.427 * Looking for test storage... 00:10:46.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.427 15:22:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.427 15:22:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.427 15:22:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.427 15:22:32 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.427 15:22:32 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.427 15:22:32 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.427 15:22:32 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.427 15:22:32 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.427 15:22:32 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.427 15:22:32 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.427 15:22:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.427 15:22:32 event -- scripts/common.sh@344 -- # case "$op" in 00:10:46.427 15:22:32 event -- scripts/common.sh@345 -- # : 1 00:10:46.427 15:22:32 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.427 15:22:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.427 15:22:32 event -- scripts/common.sh@365 -- # decimal 1 00:10:46.427 15:22:32 event -- scripts/common.sh@353 -- # local d=1 00:10:46.427 15:22:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.427 15:22:32 event -- scripts/common.sh@355 -- # echo 1 00:10:46.427 15:22:32 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.427 15:22:32 event -- scripts/common.sh@366 -- # decimal 2 00:10:46.427 15:22:32 event -- scripts/common.sh@353 -- # local d=2 00:10:46.427 15:22:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.427 15:22:32 event -- scripts/common.sh@355 -- # echo 2 00:10:46.427 15:22:32 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.427 15:22:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.427 15:22:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.427 15:22:32 event -- scripts/common.sh@368 -- # return 0 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.427 --rc genhtml_branch_coverage=1 00:10:46.427 --rc genhtml_function_coverage=1 00:10:46.427 --rc genhtml_legend=1 00:10:46.427 --rc geninfo_all_blocks=1 00:10:46.427 --rc geninfo_unexecuted_blocks=1 00:10:46.427 00:10:46.427 ' 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.427 --rc genhtml_branch_coverage=1 00:10:46.427 --rc genhtml_function_coverage=1 00:10:46.427 --rc genhtml_legend=1 00:10:46.427 --rc 
geninfo_all_blocks=1 00:10:46.427 --rc geninfo_unexecuted_blocks=1 00:10:46.427 00:10:46.427 ' 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.427 --rc genhtml_branch_coverage=1 00:10:46.427 --rc genhtml_function_coverage=1 00:10:46.427 --rc genhtml_legend=1 00:10:46.427 --rc geninfo_all_blocks=1 00:10:46.427 --rc geninfo_unexecuted_blocks=1 00:10:46.427 00:10:46.427 ' 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.427 --rc genhtml_branch_coverage=1 00:10:46.427 --rc genhtml_function_coverage=1 00:10:46.427 --rc genhtml_legend=1 00:10:46.427 --rc geninfo_all_blocks=1 00:10:46.427 --rc geninfo_unexecuted_blocks=1 00:10:46.427 00:10:46.427 ' 00:10:46.427 15:22:32 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:46.427 15:22:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:46.427 15:22:32 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:46.427 15:22:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.427 15:22:32 event -- common/autotest_common.sh@10 -- # set +x 00:10:46.427 ************************************ 00:10:46.427 START TEST event_perf 00:10:46.427 ************************************ 00:10:46.427 15:22:32 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:46.687 Running I/O for 1 seconds...[2024-11-20 15:22:32.414887] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:10:46.687 [2024-11-20 15:22:32.415197] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59147 ] 00:10:46.687 [2024-11-20 15:22:32.603612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.945 [2024-11-20 15:22:32.730369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.945 [2024-11-20 15:22:32.730444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.945 [2024-11-20 15:22:32.730527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.945 Running I/O for 1 seconds...[2024-11-20 15:22:32.730555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.341 00:10:48.341 lcore 0: 196452 00:10:48.341 lcore 1: 196450 00:10:48.341 lcore 2: 196452 00:10:48.341 lcore 3: 196452 00:10:48.341 done. 
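The per-lcore counters above are the point of event_perf: it submits events on every reactor in the core mask and reports how many each lcore processed in the run window. A sketch of the same invocation with a longer window (binary path and flags exactly as traced in this log; -m is the reactor core mask, -t the runtime in seconds):

  # 4 reactors (mask 0xF), 5-second measurement instead of the 1 second used here
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 5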
00:10:48.341 00:10:48.341 real 0m1.611s 00:10:48.341 ************************************ 00:10:48.341 END TEST event_perf 00:10:48.341 ************************************ 00:10:48.341 user 0m4.363s 00:10:48.341 sys 0m0.129s 00:10:48.341 15:22:33 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.341 15:22:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:48.341 15:22:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:48.341 15:22:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:48.341 15:22:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.341 15:22:34 event -- common/autotest_common.sh@10 -- # set +x 00:10:48.341 ************************************ 00:10:48.341 START TEST event_reactor 00:10:48.341 ************************************ 00:10:48.342 15:22:34 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:48.342 [2024-11-20 15:22:34.067164] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:10:48.342 [2024-11-20 15:22:34.067472] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59192 ] 00:10:48.342 [2024-11-20 15:22:34.239329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.601 [2024-11-20 15:22:34.361107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.982 test_start 00:10:49.982 oneshot 00:10:49.982 tick 100 00:10:49.982 tick 100 00:10:49.982 tick 250 00:10:49.982 tick 100 00:10:49.982 tick 100 00:10:49.982 tick 250 00:10:49.982 tick 100 00:10:49.982 tick 500 00:10:49.982 tick 100 00:10:49.982 tick 100 00:10:49.982 tick 250 00:10:49.982 tick 100 00:10:49.982 tick 100 00:10:49.982 test_end 00:10:49.982 ************************************ 00:10:49.982 END TEST event_reactor 00:10:49.982 ************************************ 00:10:49.982 00:10:49.982 real 0m1.563s 00:10:49.982 user 0m1.364s 00:10:49.982 sys 0m0.090s 00:10:49.982 15:22:35 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.982 15:22:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:49.982 15:22:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:49.982 15:22:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:49.982 15:22:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.982 15:22:35 event -- common/autotest_common.sh@10 -- # set +x 00:10:49.982 ************************************ 00:10:49.982 START TEST event_reactor_perf 00:10:49.982 ************************************ 00:10:49.982 15:22:35 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:49.982 [2024-11-20 15:22:35.700098] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
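The test_start/tick/test_end trace just above comes from event_reactor: it registers a one-shot poller plus repeated timed pollers on a single reactor, and each "tick N" line is one timed poller firing, N being the period it was registered with. reactor_perf, starting below, measures raw event throughput on the same reactor framework. Both can be rerun standalone (paths and the -t seconds flag as traced in this log):

  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1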
00:10:49.982 [2024-11-20 15:22:35.700251] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59223 ] 00:10:49.982 [2024-11-20 15:22:35.893078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.241 [2024-11-20 15:22:36.010281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.618 test_start 00:10:51.618 test_end 00:10:51.618 Performance: 317802 events per second 00:10:51.618 00:10:51.618 real 0m1.630s 00:10:51.618 user 0m1.382s 00:10:51.618 sys 0m0.137s 00:10:51.618 15:22:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.618 15:22:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:51.618 ************************************ 00:10:51.618 END TEST event_reactor_perf 00:10:51.618 ************************************ 00:10:51.618 15:22:37 event -- event/event.sh@49 -- # uname -s 00:10:51.618 15:22:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:51.618 15:22:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:51.618 15:22:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.618 15:22:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.618 15:22:37 event -- common/autotest_common.sh@10 -- # set +x 00:10:51.618 ************************************ 00:10:51.618 START TEST event_scheduler 00:10:51.618 ************************************ 00:10:51.618 15:22:37 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:51.618 * Looking for test storage... 
00:10:51.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:51.618 15:22:37 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:51.618 15:22:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:10:51.618 15:22:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:51.618 15:22:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:51.618 15:22:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.619 15:22:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:51.619 15:22:37 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.619 15:22:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:51.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.619 --rc genhtml_branch_coverage=1 00:10:51.619 --rc genhtml_function_coverage=1 00:10:51.619 --rc genhtml_legend=1 00:10:51.619 --rc geninfo_all_blocks=1 00:10:51.619 --rc geninfo_unexecuted_blocks=1 00:10:51.619 00:10:51.619 ' 00:10:51.619 15:22:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:51.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.619 --rc genhtml_branch_coverage=1 00:10:51.619 --rc genhtml_function_coverage=1 00:10:51.619 --rc genhtml_legend=1 00:10:51.619 --rc geninfo_all_blocks=1 00:10:51.619 --rc geninfo_unexecuted_blocks=1 00:10:51.619 00:10:51.619 ' 00:10:51.619 15:22:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:51.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.619 --rc genhtml_branch_coverage=1 00:10:51.619 --rc genhtml_function_coverage=1 00:10:51.619 --rc genhtml_legend=1 00:10:51.619 --rc geninfo_all_blocks=1 00:10:51.619 --rc geninfo_unexecuted_blocks=1 00:10:51.619 00:10:51.619 ' 00:10:51.619 15:22:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:51.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.619 --rc genhtml_branch_coverage=1 00:10:51.619 --rc genhtml_function_coverage=1 00:10:51.619 --rc genhtml_legend=1 00:10:51.619 --rc geninfo_all_blocks=1 00:10:51.619 --rc geninfo_unexecuted_blocks=1 00:10:51.619 00:10:51.619 ' 00:10:51.619 15:22:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:51.619 15:22:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59299 00:10:51.619 15:22:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:51.619 15:22:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:51.619 15:22:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59299 00:10:51.619 15:22:37 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59299 ']' 00:10:51.619 15:22:37 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.619 15:22:37 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.619 15:22:37 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.619 15:22:37 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.619 15:22:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:51.877 [2024-11-20 15:22:37.681499] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:10:51.877 [2024-11-20 15:22:37.681994] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59299 ] 00:10:52.135 [2024-11-20 15:22:37.885868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.135 [2024-11-20 15:22:38.083476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.135 [2024-11-20 15:22:38.083595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.135 [2024-11-20 15:22:38.083663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.135 [2024-11-20 15:22:38.083683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.071 15:22:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.071 15:22:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:53.071 15:22:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:53.071 15:22:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.071 15:22:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:53.071 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:53.071 POWER: Cannot set governor of lcore 0 to userspace 00:10:53.071 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:53.071 POWER: Cannot set governor of lcore 0 to performance 00:10:53.071 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:53.071 POWER: Cannot set governor of lcore 0 to userspace 00:10:53.071 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:53.071 POWER: Cannot set governor of lcore 0 to userspace 00:10:53.071 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:53.072 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:53.072 POWER: Unable to set Power Management Environment for lcore 0 00:10:53.072 [2024-11-20 15:22:38.759811] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:10:53.072 [2024-11-20 15:22:38.759839] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:10:53.072 [2024-11-20 15:22:38.759854] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:53.072 [2024-11-20 15:22:38.759875] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:53.072 [2024-11-20 15:22:38.759886] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:53.072 [2024-11-20 15:22:38.759898] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:53.072 15:22:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.072 15:22:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:53.072 15:22:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.072 15:22:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:53.330 [2024-11-20 15:22:39.101965] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
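The POWER errors above are expected on this VM: the dynamic scheduler first tries to hand CPU frequency control to the DPDK governor, and because the guest exposes no cpufreq sysfs nodes (and no virtio power-agent channel), governor init fails and the scheduler continues with load balancing only. A quick way to check for that condition on a host (sysfs path taken from the POWER messages above):

  # If this file is missing, the DPDK governor cannot initialize and the
  # dynamic scheduler runs without frequency scaling.
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null \
    || echo 'no cpufreq support; dpdk governor unavailable'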
00:10:53.330 15:22:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.330 15:22:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:53.330 15:22:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:53.331 15:22:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 ************************************ 00:10:53.331 START TEST scheduler_create_thread 00:10:53.331 ************************************ 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 2 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 3 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 4 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 5 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 6 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 7 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 8 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 9 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 10 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.331 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.898 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.898 15:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:53.898 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.898 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:55.301 15:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.301 15:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:55.301 15:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:55.301 15:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.301 15:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.678 ************************************ 00:10:56.678 END TEST scheduler_create_thread 00:10:56.678 ************************************ 00:10:56.678 15:22:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.678 00:10:56.678 real 0m3.100s 00:10:56.678 user 0m0.023s 00:10:56.678 sys 0m0.006s 00:10:56.678 15:22:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.678 15:22:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.678 15:22:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:56.678 15:22:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59299 00:10:56.678 15:22:42 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59299 ']' 00:10:56.678 15:22:42 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59299 00:10:56.678 15:22:42 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:56.678 15:22:42 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.678 15:22:42 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59299 00:10:56.678 killing process with pid 59299 00:10:56.678 15:22:42 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:56.678 15:22:42 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:56.678 15:22:42 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59299' 00:10:56.678 15:22:42 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59299 00:10:56.678 15:22:42 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59299 00:10:56.678 [2024-11-20 15:22:42.597065] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
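The rpc_cmd calls traced above drive the scheduler through its test plugin: scheduler_thread_create spawns threads with a name (-n), cpumask (-m) and active percentage (-a), scheduler_thread_set_active changes a thread's load, and scheduler_thread_delete removes it. A sketch of issuing the same RPCs by hand against the scheduler app started above (commands as traced in this log; rpc_cmd is the harness wrapper around scripts/rpc.py, and the thread id is whatever scheduler_thread_create returned):

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 12 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12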
00:10:58.056 00:10:58.056 real 0m6.477s 00:10:58.056 user 0m13.390s 00:10:58.056 sys 0m0.556s 00:10:58.056 15:22:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.056 15:22:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:58.056 ************************************ 00:10:58.056 END TEST event_scheduler 00:10:58.056 ************************************ 00:10:58.056 15:22:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:58.056 15:22:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:58.056 15:22:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:58.056 15:22:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.056 15:22:43 event -- common/autotest_common.sh@10 -- # set +x 00:10:58.056 ************************************ 00:10:58.056 START TEST app_repeat 00:10:58.056 ************************************ 00:10:58.056 15:22:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:58.056 Process app_repeat pid: 59416 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59416 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59416' 00:10:58.056 spdk_app_start Round 0 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:58.056 15:22:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59416 /var/tmp/spdk-nbd.sock 00:10:58.056 15:22:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59416 ']' 00:10:58.056 15:22:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:58.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:58.056 15:22:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.056 15:22:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:58.056 15:22:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.056 15:22:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:58.056 [2024-11-20 15:22:43.953686] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:10:58.056 [2024-11-20 15:22:43.953805] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59416 ] 00:10:58.315 [2024-11-20 15:22:44.128556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:58.315 [2024-11-20 15:22:44.250172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.315 [2024-11-20 15:22:44.250201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.884 15:22:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.884 15:22:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:58.884 15:22:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:59.142 Malloc0 00:10:59.142 15:22:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:59.402 Malloc1 00:10:59.662 15:22:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:59.662 /dev/nbd0 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:59.662 15:22:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:59.662 15:22:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:59.662 15:22:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:59.662 15:22:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:59.662 15:22:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:59.662 15:22:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:59.662 15:22:45 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:10:59.662 15:22:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:59.662 15:22:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:59.921 15:22:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:59.921 1+0 records in 00:10:59.921 1+0 records out 00:10:59.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435342 s, 9.4 MB/s 00:10:59.921 15:22:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:59.921 15:22:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:59.921 15:22:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:59.921 15:22:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:59.921 15:22:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:59.921 15:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:59.921 15:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:59.922 15:22:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:00.181 /dev/nbd1 00:11:00.181 15:22:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:00.181 15:22:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:00.181 1+0 records in 00:11:00.181 1+0 records out 00:11:00.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262038 s, 15.6 MB/s 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:00.181 15:22:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:00.181 15:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:00.181 15:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:00.181 15:22:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:00.181 15:22:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.181 
15:22:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:00.440 { 00:11:00.440 "nbd_device": "/dev/nbd0", 00:11:00.440 "bdev_name": "Malloc0" 00:11:00.440 }, 00:11:00.440 { 00:11:00.440 "nbd_device": "/dev/nbd1", 00:11:00.440 "bdev_name": "Malloc1" 00:11:00.440 } 00:11:00.440 ]' 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:00.440 { 00:11:00.440 "nbd_device": "/dev/nbd0", 00:11:00.440 "bdev_name": "Malloc0" 00:11:00.440 }, 00:11:00.440 { 00:11:00.440 "nbd_device": "/dev/nbd1", 00:11:00.440 "bdev_name": "Malloc1" 00:11:00.440 } 00:11:00.440 ]' 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:00.440 /dev/nbd1' 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:00.440 /dev/nbd1' 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:00.440 15:22:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:00.441 256+0 records in 00:11:00.441 256+0 records out 00:11:00.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00823958 s, 127 MB/s 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:00.441 256+0 records in 00:11:00.441 256+0 records out 00:11:00.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029507 s, 35.5 MB/s 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:00.441 256+0 records in 00:11:00.441 256+0 records out 00:11:00.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0319359 s, 32.8 MB/s 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:00.441 15:22:46 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.441 15:22:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:00.700 15:22:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:00.700 15:22:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:00.700 15:22:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:00.700 15:22:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.700 15:22:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.700 15:22:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:00.700 15:22:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:00.700 15:22:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.700 15:22:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.700 15:22:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:00.959 15:22:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:00.959 15:22:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:00.959 15:22:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:00.959 15:22:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.959 15:22:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.959 15:22:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:00.959 15:22:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:00.959 15:22:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.959 15:22:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:00.959 15:22:46 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.959 15:22:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:01.217 15:22:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:01.217 15:22:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:01.790 15:22:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:03.176 [2024-11-20 15:22:48.808365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:03.176 [2024-11-20 15:22:48.922605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.176 [2024-11-20 15:22:48.922606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.176 [2024-11-20 15:22:49.121689] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:03.176 [2024-11-20 15:22:49.121789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:05.079 spdk_app_start Round 1 00:11:05.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:05.079 15:22:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:05.079 15:22:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:05.079 15:22:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59416 /var/tmp/spdk-nbd.sock 00:11:05.079 15:22:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59416 ']' 00:11:05.079 15:22:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:05.079 15:22:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.079 15:22:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:05.079 15:22:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.079 15:22:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:05.079 15:22:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.079 15:22:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:05.079 15:22:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:05.338 Malloc0 00:11:05.338 15:22:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:05.597 Malloc1 00:11:05.597 15:22:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:05.597 15:22:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.597 15:22:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:05.857 /dev/nbd0 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:05.857 1+0 records in 00:11:05.857 1+0 records out 
00:11:05.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240738 s, 17.0 MB/s 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:05.857 15:22:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:05.857 15:22:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:06.117 /dev/nbd1 00:11:06.117 15:22:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:06.117 15:22:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:06.117 1+0 records in 00:11:06.117 1+0 records out 00:11:06.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244442 s, 16.8 MB/s 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:06.117 15:22:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:06.117 15:22:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.117 15:22:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:06.117 15:22:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:06.117 15:22:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.117 15:22:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:06.684 { 00:11:06.684 "nbd_device": "/dev/nbd0", 00:11:06.684 "bdev_name": "Malloc0" 00:11:06.684 }, 00:11:06.684 { 00:11:06.684 "nbd_device": "/dev/nbd1", 00:11:06.684 "bdev_name": "Malloc1" 00:11:06.684 } 
00:11:06.684 ]' 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:06.684 { 00:11:06.684 "nbd_device": "/dev/nbd0", 00:11:06.684 "bdev_name": "Malloc0" 00:11:06.684 }, 00:11:06.684 { 00:11:06.684 "nbd_device": "/dev/nbd1", 00:11:06.684 "bdev_name": "Malloc1" 00:11:06.684 } 00:11:06.684 ]' 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:06.684 /dev/nbd1' 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:06.684 /dev/nbd1' 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:06.684 15:22:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:06.685 256+0 records in 00:11:06.685 256+0 records out 00:11:06.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743061 s, 141 MB/s 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:06.685 256+0 records in 00:11:06.685 256+0 records out 00:11:06.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282191 s, 37.2 MB/s 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:06.685 256+0 records in 00:11:06.685 256+0 records out 00:11:06.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025863 s, 40.5 MB/s 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.685 15:22:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:06.944 15:22:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:06.944 15:22:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:06.944 15:22:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:06.944 15:22:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.944 15:22:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.944 15:22:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:06.944 15:22:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:06.944 15:22:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.944 15:22:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.944 15:22:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:07.203 15:22:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:07.203 15:22:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:07.203 15:22:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:07.203 15:22:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.203 15:22:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.203 15:22:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:07.203 15:22:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:07.203 15:22:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.203 15:22:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:07.203 15:22:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.203 15:22:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:07.462 15:22:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:07.462 15:22:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:07.462 15:22:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:11:07.462 15:22:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:07.462 15:22:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:07.462 15:22:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:07.462 15:22:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:07.722 15:22:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:07.722 15:22:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:07.722 15:22:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:07.722 15:22:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:07.722 15:22:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:07.722 15:22:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:07.980 15:22:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:09.389 [2024-11-20 15:22:55.012328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:09.389 [2024-11-20 15:22:55.125152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.389 [2024-11-20 15:22:55.125158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.389 [2024-11-20 15:22:55.323131] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:09.389 [2024-11-20 15:22:55.323227] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:11.292 spdk_app_start Round 2 00:11:11.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:11.292 15:22:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:11.292 15:22:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:11.292 15:22:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59416 /var/tmp/spdk-nbd.sock 00:11:11.292 15:22:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59416 ']' 00:11:11.292 15:22:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:11.292 15:22:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.292 15:22:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:11.292 15:22:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.292 15:22:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:11.292 15:22:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.292 15:22:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:11.292 15:22:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:11.550 Malloc0 00:11:11.550 15:22:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:11.808 Malloc1 00:11:11.808 15:22:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:11.808 15:22:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:12.067 /dev/nbd0 00:11:12.067 15:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:12.067 15:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:12.067 1+0 records in 00:11:12.067 1+0 records out 
00:11:12.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318534 s, 12.9 MB/s 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:12.067 15:22:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:12.067 15:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.067 15:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.067 15:22:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:12.326 /dev/nbd1 00:11:12.326 15:22:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:12.326 15:22:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:12.326 1+0 records in 00:11:12.326 1+0 records out 00:11:12.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298036 s, 13.7 MB/s 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:12.326 15:22:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:12.326 15:22:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.326 15:22:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.326 15:22:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:12.326 15:22:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.326 15:22:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:12.891 { 00:11:12.891 "nbd_device": "/dev/nbd0", 00:11:12.891 "bdev_name": "Malloc0" 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "nbd_device": "/dev/nbd1", 00:11:12.891 "bdev_name": "Malloc1" 00:11:12.891 } 
00:11:12.891 ]' 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:12.891 { 00:11:12.891 "nbd_device": "/dev/nbd0", 00:11:12.891 "bdev_name": "Malloc0" 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "nbd_device": "/dev/nbd1", 00:11:12.891 "bdev_name": "Malloc1" 00:11:12.891 } 00:11:12.891 ]' 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:12.891 /dev/nbd1' 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:12.891 /dev/nbd1' 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:12.891 256+0 records in 00:11:12.891 256+0 records out 00:11:12.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00922125 s, 114 MB/s 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:12.891 256+0 records in 00:11:12.891 256+0 records out 00:11:12.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233719 s, 44.9 MB/s 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:12.891 256+0 records in 00:11:12.891 256+0 records out 00:11:12.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349911 s, 30.0 MB/s 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:12.891 15:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:12.892 15:22:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.892 15:22:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:13.150 15:22:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:13.150 15:22:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:13.150 15:22:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:13.150 15:22:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.150 15:22:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.150 15:22:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:13.150 15:22:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:13.150 15:22:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.150 15:22:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.150 15:22:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:13.420 15:22:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:13.420 15:22:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:13.420 15:22:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:13.420 15:22:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.420 15:22:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.421 15:22:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:13.421 15:22:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:13.421 15:22:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.421 15:22:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:13.421 15:22:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.421 15:22:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:13.421 15:22:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:13.421 15:22:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:13.421 15:22:59 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:11:13.686 15:22:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:13.686 15:22:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:13.687 15:22:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.687 15:22:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:13.687 15:22:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:13.687 15:22:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:13.687 15:22:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:13.687 15:22:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:13.687 15:22:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:13.687 15:22:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:13.946 15:22:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:15.323 [2024-11-20 15:23:01.009878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:15.323 [2024-11-20 15:23:01.123942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.323 [2024-11-20 15:23:01.123947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.582 [2024-11-20 15:23:01.324095] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:15.582 [2024-11-20 15:23:01.324213] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:16.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:16.960 15:23:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59416 /var/tmp/spdk-nbd.sock 00:11:16.960 15:23:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59416 ']' 00:11:16.960 15:23:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:16.960 15:23:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.960 15:23:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:16.960 15:23:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.960 15:23:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:17.219 15:23:03 event.app_repeat -- event/event.sh@39 -- # killprocess 59416 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59416 ']' 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59416 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59416 00:11:17.219 killing process with pid 59416 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59416' 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59416 00:11:17.219 15:23:03 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59416 00:11:18.595 spdk_app_start is called in Round 0. 00:11:18.595 Shutdown signal received, stop current app iteration 00:11:18.595 Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 reinitialization... 00:11:18.595 spdk_app_start is called in Round 1. 00:11:18.595 Shutdown signal received, stop current app iteration 00:11:18.595 Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 reinitialization... 00:11:18.595 spdk_app_start is called in Round 2. 00:11:18.595 Shutdown signal received, stop current app iteration 00:11:18.595 Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 reinitialization... 00:11:18.595 spdk_app_start is called in Round 3. 00:11:18.595 Shutdown signal received, stop current app iteration 00:11:18.595 15:23:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:18.595 15:23:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:18.595 00:11:18.595 real 0m20.297s 00:11:18.595 user 0m43.614s 00:11:18.595 sys 0m3.331s 00:11:18.595 15:23:04 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.595 15:23:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:18.595 ************************************ 00:11:18.595 END TEST app_repeat 00:11:18.595 ************************************ 00:11:18.595 15:23:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:18.595 15:23:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:18.595 15:23:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.595 15:23:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.595 15:23:04 event -- common/autotest_common.sh@10 -- # set +x 00:11:18.595 ************************************ 00:11:18.595 START TEST cpu_locks 00:11:18.595 ************************************ 00:11:18.595 15:23:04 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:18.595 * Looking for test storage... 
00:11:18.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:18.595 15:23:04 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:18.595 15:23:04 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:11:18.595 15:23:04 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:18.595 15:23:04 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.595 15:23:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:18.595 15:23:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.595 15:23:04 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:18.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.595 --rc genhtml_branch_coverage=1 00:11:18.595 --rc genhtml_function_coverage=1 00:11:18.595 --rc genhtml_legend=1 00:11:18.595 --rc geninfo_all_blocks=1 00:11:18.595 --rc geninfo_unexecuted_blocks=1 00:11:18.595 00:11:18.595 ' 00:11:18.595 15:23:04 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:18.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.595 --rc genhtml_branch_coverage=1 00:11:18.595 --rc genhtml_function_coverage=1 
00:11:18.595 --rc genhtml_legend=1 00:11:18.595 --rc geninfo_all_blocks=1 00:11:18.595 --rc geninfo_unexecuted_blocks=1 00:11:18.595 00:11:18.595 ' 00:11:18.595 15:23:04 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:18.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.595 --rc genhtml_branch_coverage=1 00:11:18.595 --rc genhtml_function_coverage=1 00:11:18.595 --rc genhtml_legend=1 00:11:18.595 --rc geninfo_all_blocks=1 00:11:18.595 --rc geninfo_unexecuted_blocks=1 00:11:18.595 00:11:18.595 ' 00:11:18.595 15:23:04 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:18.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.595 --rc genhtml_branch_coverage=1 00:11:18.595 --rc genhtml_function_coverage=1 00:11:18.595 --rc genhtml_legend=1 00:11:18.595 --rc geninfo_all_blocks=1 00:11:18.595 --rc geninfo_unexecuted_blocks=1 00:11:18.595 00:11:18.595 ' 00:11:18.595 15:23:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:18.595 15:23:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:18.596 15:23:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:18.596 15:23:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:18.596 15:23:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.596 15:23:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.596 15:23:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:18.596 ************************************ 00:11:18.596 START TEST default_locks 00:11:18.596 ************************************ 00:11:18.596 15:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:18.596 15:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59876 00:11:18.596 15:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59876 00:11:18.596 15:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59876 ']' 00:11:18.596 15:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.596 15:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.596 15:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:18.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.596 15:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.596 15:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.596 15:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:18.854 [2024-11-20 15:23:04.570633] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
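[editor's note] The long cmp_versions trace above is autotest_common.sh probing the installed lcov before enabling branch/function coverage flags: it splits both version strings on '.', '-' and ':' and compares field by field. A hedged sketch of that comparison; the real helper lives in scripts/common.sh and this is a simplified reconstruction, not the shipped code:

    # Field-wise numeric version compare, as the IFS=.-: / read -ra trace shows.
    cmp_versions() { # e.g. cmp_versions 1.15 '<' 2
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' ]] # every field matched
    }

    # The gate traced above: old lcov (< 2) gets the --rc spellings of the options.
    if cmp_versions "$(lcov --version | awk '{print $NF}')" '<' 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi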
00:11:18.854 [2024-11-20 15:23:04.570759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59876 ] 00:11:18.854 [2024-11-20 15:23:04.742202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.113 [2024-11-20 15:23:04.858816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.048 15:23:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.048 15:23:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:20.048 15:23:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59876 00:11:20.048 15:23:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:20.048 15:23:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59876 00:11:20.306 15:23:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59876 00:11:20.306 15:23:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59876 ']' 00:11:20.306 15:23:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59876 00:11:20.306 15:23:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:20.306 15:23:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.306 15:23:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59876 00:11:20.566 15:23:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.566 15:23:06 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.566 killing process with pid 59876 00:11:20.566 15:23:06 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59876' 00:11:20.566 15:23:06 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59876 00:11:20.566 15:23:06 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59876 00:11:23.128 15:23:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59876 00:11:23.128 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:23.128 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59876 00:11:23.128 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:23.128 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.128 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:23.128 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.128 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59876 00:11:23.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
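[editor's note] locks_exist, traced above right after the target reports its reactor on core 0, is the core assertion of this whole file: a reactor that claimed core N holds an advisory lock on /var/tmp/spdk_cpu_lock_NNN, and lslocks can list the locks held by a PID. A minimal sketch of what the event/cpu_locks.sh trace shows (reconstruction, not the shipped helper):

    # One pipeline: does this PID hold any spdk_cpu_lock_* file lock?
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist "$spdk_tgt_pid" && echo "target still holds its core lock"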
00:11:23.128 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59876 ']' 00:11:23.128 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.128 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.129 ERROR: process (pid: 59876) is no longer running 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:23.129 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59876) - No such process 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:23.129 00:11:23.129 real 0m4.230s 00:11:23.129 user 0m4.299s 00:11:23.129 sys 0m0.732s 00:11:23.129 ************************************ 00:11:23.129 END TEST default_locks 00:11:23.129 ************************************ 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.129 15:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:23.129 15:23:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:23.129 15:23:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.129 15:23:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.129 15:23:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:23.129 ************************************ 00:11:23.129 START TEST default_locks_via_rpc 00:11:23.129 ************************************ 00:11:23.129 15:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:23.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
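[editor's note] The NOT wrapper that closes default_locks above (valid_exec_arg, es=1, the (( es > 128 )) branch) exists to assert that a command fails: here, that waiting on the already-killed target errors out. A hedged reduction of the pattern; the shipped helper also validates its argument and special-cases signal deaths, which this sketch omits:

    # Run a command that is expected to fail; succeed only if it really did.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT waitforlisten 59876   # passes: pid 59876 was killed just before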
00:11:23.129 15:23:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59957 00:11:23.129 15:23:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59957 00:11:23.129 15:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59957 ']' 00:11:23.129 15:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.129 15:23:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:23.129 15:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.129 15:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.129 15:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.129 15:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.129 [2024-11-20 15:23:08.856872] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:11:23.129 [2024-11-20 15:23:08.857017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59957 ] 00:11:23.129 [2024-11-20 15:23:09.025698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.387 [2024-11-20 15:23:09.140611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59957 00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
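[editor's note] default_locks_via_rpc exercises the same lock files but toggles them on a running target over JSON-RPC instead of at startup; the lslocks re-check continues just below. Roughly what the rpc_cmd traces amount to, assuming rpc_cmd's usual role as the test wrapper around scripts/rpc.py (sketch only):

    # Release the core locks at runtime, verify none remain, then re-claim them.
    scripts/rpc.py framework_disable_cpumask_locks               # lock files dropped
    lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock || true   # expect 0 matches
    scripts/rpc.py framework_enable_cpumask_locks                # locks come back
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock           # and are visible again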
00:11:24.324 15:23:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59957 00:11:24.582 15:23:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59957 00:11:24.582 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59957 ']' 00:11:24.582 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59957 00:11:24.582 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:24.582 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.582 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59957 00:11:24.582 killing process with pid 59957 00:11:24.582 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.582 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.582 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59957' 00:11:24.582 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59957 00:11:24.582 15:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59957 00:11:27.125 00:11:27.125 real 0m4.127s 00:11:27.125 user 0m4.162s 00:11:27.125 sys 0m0.646s 00:11:27.125 15:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.125 15:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.125 ************************************ 00:11:27.125 END TEST default_locks_via_rpc 00:11:27.125 ************************************ 00:11:27.125 15:23:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:27.125 15:23:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:27.125 15:23:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.125 15:23:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:27.125 ************************************ 00:11:27.125 START TEST non_locking_app_on_locked_coremask 00:11:27.125 ************************************ 00:11:27.125 15:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:27.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
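[editor's note] killprocess, traced in full above, is deliberately defensive: it rejects empty PIDs, probes liveness with kill -0, and on Linux reads the comm name so it never kills a sudo wrapper by mistake. Its shape, simplified from the autotest_common.sh trace (the shipped version handles the sudo case rather than just refusing):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                      # must still be running
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1          # never kill the sudo parent
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                             # reap; ignore the kill status
    }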
00:11:27.125 15:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60036 00:11:27.125 15:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60036 /var/tmp/spdk.sock 00:11:27.125 15:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60036 ']' 00:11:27.125 15:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:27.125 15:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.125 15:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.125 15:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.125 15:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.125 15:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:27.399 [2024-11-20 15:23:13.100587] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:11:27.399 [2024-11-20 15:23:13.101052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60036 ] 00:11:27.399 [2024-11-20 15:23:13.297701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.658 [2024-11-20 15:23:13.415572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.594 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.594 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:28.594 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60052 00:11:28.594 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60052 /var/tmp/spdk2.sock 00:11:28.594 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:28.594 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60052 ']' 00:11:28.594 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:28.594 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.594 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:28.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
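[editor's note] non_locking_app_on_locked_coremask, starting above, is the inverse check: the first target claims core 0 normally, and a second target may still share that core because it opts out of claiming. The setup reduced to its two launches, with binary path and flags exactly as they appear in this log:

    # First instance claims core 0 (mask 0x1) and serves /var/tmp/spdk.sock.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

    # Second instance shares core 0 only because it skips the lock claim,
    # and it needs its own RPC socket to be reachable.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock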
00:11:28.594 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.594 15:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:28.594 [2024-11-20 15:23:14.433316] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:11:28.594 [2024-11-20 15:23:14.433768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60052 ] 00:11:28.853 [2024-11-20 15:23:14.628030] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:28.853 [2024-11-20 15:23:14.628087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.111 [2024-11-20 15:23:14.863329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.016 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.016 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:31.016 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60036 00:11:31.016 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:31.016 15:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60036 00:11:32.392 15:23:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60036 00:11:32.392 15:23:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60036 ']' 00:11:32.392 15:23:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60036 00:11:32.392 15:23:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:32.392 15:23:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.392 15:23:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60036 00:11:32.392 15:23:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.392 15:23:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.392 15:23:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60036' 00:11:32.392 killing process with pid 60036 00:11:32.392 15:23:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60036 00:11:32.392 15:23:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60036 00:11:37.680 15:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60052 00:11:37.680 15:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60052 ']' 00:11:37.680 15:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60052 00:11:37.680 15:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 
-- # uname 00:11:37.680 15:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.680 15:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60052 00:11:37.680 killing process with pid 60052 00:11:37.680 15:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.680 15:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.680 15:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60052' 00:11:37.680 15:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60052 00:11:37.680 15:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60052 00:11:39.585 ************************************ 00:11:39.585 END TEST non_locking_app_on_locked_coremask 00:11:39.585 ************************************ 00:11:39.585 00:11:39.585 real 0m12.378s 00:11:39.585 user 0m12.787s 00:11:39.585 sys 0m1.583s 00:11:39.585 15:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.585 15:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:39.585 15:23:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:39.585 15:23:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:39.585 15:23:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.585 15:23:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:39.585 ************************************ 00:11:39.585 START TEST locking_app_on_unlocked_coremask 00:11:39.585 ************************************ 00:11:39.585 15:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:39.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.585 15:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60211 00:11:39.585 15:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60211 /var/tmp/spdk.sock 00:11:39.585 15:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60211 ']' 00:11:39.585 15:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:39.585 15:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.585 15:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.585 15:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:39.585 15:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.585 15:23:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:39.585 [2024-11-20 15:23:25.537104] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:11:39.585 [2024-11-20 15:23:25.537608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60211 ] 00:11:39.844 [2024-11-20 15:23:25.730915] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:39.844 [2024-11-20 15:23:25.731145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.102 [2024-11-20 15:23:25.847350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.039 15:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.039 15:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:41.039 15:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60227 00:11:41.039 15:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60227 /var/tmp/spdk2.sock 00:11:41.039 15:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:41.039 15:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60227 ']' 00:11:41.039 15:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:41.039 15:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:41.039 15:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:41.039 15:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.039 15:23:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:41.039 [2024-11-20 15:23:26.851406] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
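[editor's note] The 'CPU core locks deactivated' notice above is spdk_app_start acknowledging --disable-cpumask-locks. Since lslocks can report the locks, they behave as ordinary advisory file locks, so the collision can be imitated from a plain shell with util-linux flock. Illustrative only, reusing the lock-file path this log shows later; not part of the test itself:

    # Hold core 0's lock file the way a claiming reactor would...
    flock -n /var/tmp/spdk_cpu_lock_000 -c 'sleep 60' &
    sleep 0.2   # give the background flock a moment to acquire
    # ...then any second claimant is turned away, mirroring the claim errors below.
    flock -n /var/tmp/spdk_cpu_lock_000 -c true || echo "core 0 already claimed"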
00:11:41.039 [2024-11-20 15:23:26.851615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60227 ] 00:11:41.298 [2024-11-20 15:23:27.036299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.557 [2024-11-20 15:23:27.264219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.091 15:23:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.092 15:23:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:44.092 15:23:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60227 00:11:44.092 15:23:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60227 00:11:44.092 15:23:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:45.025 15:23:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60211 00:11:45.025 15:23:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60211 ']' 00:11:45.025 15:23:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60211 00:11:45.025 15:23:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:45.025 15:23:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.025 15:23:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60211 00:11:45.025 15:23:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.025 15:23:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.025 15:23:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60211' 00:11:45.025 killing process with pid 60211 00:11:45.025 15:23:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60211 00:11:45.025 15:23:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60211 00:11:51.580 15:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60227 00:11:51.580 15:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60227 ']' 00:11:51.580 15:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60227 00:11:51.580 15:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:51.580 15:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.580 15:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60227 00:11:51.580 killing process with pid 60227 00:11:51.580 15:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.580 15:23:36 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.580 15:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60227' 00:11:51.580 15:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60227 00:11:51.580 15:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60227 00:11:53.484 00:11:53.484 real 0m13.827s 00:11:53.484 user 0m14.409s 00:11:53.484 sys 0m1.608s 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.484 ************************************ 00:11:53.484 END TEST locking_app_on_unlocked_coremask 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:53.484 ************************************ 00:11:53.484 15:23:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:53.484 15:23:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:53.484 15:23:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.484 15:23:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:53.484 ************************************ 00:11:53.484 START TEST locking_app_on_locked_coremask 00:11:53.484 ************************************ 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60397 00:11:53.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60397 /var/tmp/spdk.sock 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60397 ']' 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.484 15:23:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:53.484 [2024-11-20 15:23:39.429350] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:11:53.484 [2024-11-20 15:23:39.429531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60397 ] 00:11:53.742 [2024-11-20 15:23:39.622661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.001 [2024-11-20 15:23:39.739873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60419 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60419 /var/tmp/spdk2.sock 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60419 /var/tmp/spdk2.sock 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60419 /var/tmp/spdk2.sock 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60419 ']' 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:54.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.936 15:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:54.937 [2024-11-20 15:23:40.758849] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
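[editor's note] locking_app_on_locked_coremask now starts a second target on the same single-core mask without --disable-cpumask-locks, so the only acceptable outcome is the claim failure recorded just below. The assertion pairs the NOT wrapper with waitforlisten (a sketch; command and socket taken from the trace):

    # Second instance on the already-claimed core 0 must fail to come up.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock   # passes on the claim error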
00:11:54.937 [2024-11-20 15:23:40.759330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60419 ] 00:11:55.194 [2024-11-20 15:23:40.962975] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60397 has claimed it. 00:11:55.194 [2024-11-20 15:23:40.963040] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:55.453 ERROR: process (pid: 60419) is no longer running 00:11:55.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60419) - No such process 00:11:55.453 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.453 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:55.453 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:55.453 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:55.453 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:55.453 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:55.453 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60397 00:11:55.453 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60397 00:11:55.453 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:56.020 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60397 00:11:56.020 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60397 ']' 00:11:56.020 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60397 00:11:56.020 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:56.020 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.020 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60397 00:11:56.020 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.020 killing process with pid 60397 00:11:56.020 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.020 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60397' 00:11:56.020 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60397 00:11:56.020 15:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60397 00:11:58.548 00:11:58.548 real 0m4.987s 00:11:58.548 user 0m5.204s 00:11:58.548 sys 0m0.930s 00:11:58.548 15:23:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.548 ************************************ 00:11:58.548 END 
TEST locking_app_on_locked_coremask 00:11:58.548 ************************************ 00:11:58.548 15:23:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:58.548 15:23:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:58.548 15:23:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.548 15:23:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.548 15:23:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:58.548 ************************************ 00:11:58.548 START TEST locking_overlapped_coremask 00:11:58.548 ************************************ 00:11:58.548 15:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:58.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.548 15:23:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60488 00:11:58.548 15:23:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60488 /var/tmp/spdk.sock 00:11:58.548 15:23:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:58.548 15:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60488 ']' 00:11:58.548 15:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.548 15:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.548 15:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.548 15:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.548 15:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:58.548 [2024-11-20 15:23:44.458039] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:11:58.548 [2024-11-20 15:23:44.458189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60488 ] 00:11:58.806 [2024-11-20 15:23:44.628986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:58.806 [2024-11-20 15:23:44.757449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.806 [2024-11-20 15:23:44.757511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.806 [2024-11-20 15:23:44.757500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.781 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.781 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:59.781 15:23:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60512 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60512 /var/tmp/spdk2.sock 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60512 /var/tmp/spdk2.sock 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60512 /var/tmp/spdk2.sock 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60512 ']' 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:59.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.782 15:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:00.038 [2024-11-20 15:23:45.824873] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
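[editor's note] locking_overlapped_coremask runs the first target with -m 0x7 (cores 0-2) and launches the second with -m 0x1c (cores 2-4): the masks intersect on exactly one core, and that is the core named in the claim error below. The overlap is one bitwise AND:

    # 0x7 = cores 0,1,2 and 0x1c = cores 2,3,4 -> they collide on core 2.
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints: overlap mask: 0x4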
00:12:00.038 [2024-11-20 15:23:45.825006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60512 ] 00:12:00.295 [2024-11-20 15:23:46.010288] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60488 has claimed it. 00:12:00.295 [2024-11-20 15:23:46.010380] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:00.552 ERROR: process (pid: 60512) is no longer running 00:12:00.552 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60512) - No such process 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60488 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60488 ']' 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60488 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60488 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.552 killing process with pid 60488 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60488' 00:12:00.552 15:23:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60488 00:12:00.552 15:23:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60488 00:12:03.834 00:12:03.834 real 0m4.906s 00:12:03.834 user 0m13.262s 00:12:03.834 sys 0m0.663s 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.834 ************************************ 00:12:03.834 END TEST locking_overlapped_coremask 00:12:03.834 ************************************ 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:03.834 15:23:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:03.834 15:23:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:03.834 15:23:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.834 15:23:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:03.834 ************************************ 00:12:03.834 START TEST locking_overlapped_coremask_via_rpc 00:12:03.834 ************************************ 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60576 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60576 /var/tmp/spdk.sock 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60576 ']' 00:12:03.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.834 15:23:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:03.835 [2024-11-20 15:23:49.441344] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:12:03.835 [2024-11-20 15:23:49.441522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60576 ] 00:12:03.835 [2024-11-20 15:23:49.638504] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
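[editor's note] The check_remaining_locks trace a little above closes locking_overlapped_coremask by asserting that, after the overlapped second instance died, the survivor still owns exactly the lock files for cores 0-2. It compares a glob against a brace expansion, as the escaped [[ ... ]] pattern in the trace shows; a sketch of the same check:

    # Mask 0x7 should leave exactly these three lock files behind.
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }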
00:12:03.835 [2024-11-20 15:23:49.638584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:03.835 [2024-11-20 15:23:49.781102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.835 [2024-11-20 15:23:49.781252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.835 [2024-11-20 15:23:49.781280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:05.210 15:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.210 15:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:05.210 15:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:05.210 15:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60599 00:12:05.210 15:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60599 /var/tmp/spdk2.sock 00:12:05.210 15:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60599 ']' 00:12:05.210 15:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:05.210 15:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.210 15:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:05.210 15:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.210 15:23:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.210 [2024-11-20 15:23:50.885347] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:12:05.210 [2024-11-20 15:23:50.886146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60599 ] 00:12:05.210 [2024-11-20 15:23:51.082187] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:05.210 [2024-11-20 15:23:51.082246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:05.468 [2024-11-20 15:23:51.347521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.468 [2024-11-20 15:23:51.347649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.468 [2024-11-20 15:23:51.347680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.996 [2024-11-20 15:23:53.663804] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60576 has claimed it. 00:12:07.996 request: 00:12:07.996 { 00:12:07.996 "method": "framework_enable_cpumask_locks", 00:12:07.996 "req_id": 1 00:12:07.996 } 00:12:07.996 Got JSON-RPC error response 00:12:07.996 response: 00:12:07.996 { 00:12:07.996 "code": -32603, 00:12:07.996 "message": "Failed to claim CPU core: 2" 00:12:07.996 } 00:12:07.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
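The -32603 error above is the expected result: both targets were started with --disable-cpumask-locks, the first then claimed its cores over RPC, and the second target's claim collides on the shared core 2. A minimal sketch of the same sequence, assuming a local SPDK build laid out as in this run:

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # cores 0-2, no lock files yet
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4
    scripts/rpc.py framework_enable_cpumask_locks                                 # first target claims its cores
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks          # fails: "Failed to claim CPU core: 2" (-32603)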
00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60576 /var/tmp/spdk.sock 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60576 ']' 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.996 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.255 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.255 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:08.255 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60599 /var/tmp/spdk2.sock 00:12:08.255 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60599 ']' 00:12:08.255 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:08.255 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.255 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:08.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
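The es bookkeeping above comes from the NOT wrapper in autotest_common.sh: the RPC is expected to fail, its exit status is captured, and the test step passes only because that status is nonzero. A simplified sketch of the pattern (the real helper has more cases, including the es > 128 signal check seen above):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # succeed only when the wrapped command failed
    }
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks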
00:12:08.255 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.255 15:23:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.548 15:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.548 15:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:08.548 15:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:08.548 15:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:08.548 15:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:08.548 15:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:08.548 00:12:08.548 real 0m5.019s 00:12:08.548 user 0m1.804s 00:12:08.548 sys 0m0.297s 00:12:08.548 15:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.548 15:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.548 ************************************ 00:12:08.548 END TEST locking_overlapped_coremask_via_rpc 00:12:08.548 ************************************ 00:12:08.548 15:23:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:08.548 15:23:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60576 ]] 00:12:08.548 15:23:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60576 00:12:08.548 15:23:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60576 ']' 00:12:08.548 15:23:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60576 00:12:08.548 15:23:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:08.548 15:23:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.548 15:23:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60576 00:12:08.548 killing process with pid 60576 00:12:08.548 15:23:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.548 15:23:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.548 15:23:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60576' 00:12:08.548 15:23:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60576 00:12:08.548 15:23:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60576 00:12:11.112 15:23:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60599 ]] 00:12:11.112 15:23:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60599 00:12:11.112 15:23:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60599 ']' 00:12:11.112 15:23:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60599 00:12:11.112 15:23:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:11.112 15:23:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.112 
15:23:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60599 00:12:11.112 killing process with pid 60599 00:12:11.112 15:23:56 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:11.112 15:23:56 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:11.112 15:23:56 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60599' 00:12:11.112 15:23:56 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60599 00:12:11.112 15:23:56 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60599 00:12:14.394 15:23:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:14.394 15:23:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:14.394 15:23:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60576 ]] 00:12:14.394 15:23:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60576 00:12:14.394 15:23:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60576 ']' 00:12:14.394 15:23:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60576 00:12:14.394 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60576) - No such process 00:12:14.394 Process with pid 60576 is not found 00:12:14.394 15:23:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60576 is not found' 00:12:14.394 15:23:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60599 ]] 00:12:14.394 15:23:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60599 00:12:14.394 15:23:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60599 ']' 00:12:14.394 15:23:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60599 00:12:14.394 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60599) - No such process 00:12:14.394 15:23:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60599 is not found' 00:12:14.394 Process with pid 60599 is not found 00:12:14.394 15:23:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:14.394 00:12:14.394 real 0m55.525s 00:12:14.394 user 1m36.035s 00:12:14.394 sys 0m7.720s 00:12:14.394 ************************************ 00:12:14.394 END TEST cpu_locks 00:12:14.394 ************************************ 00:12:14.394 15:23:59 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.394 15:23:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:14.394 ************************************ 00:12:14.394 END TEST event 00:12:14.394 ************************************ 00:12:14.394 00:12:14.394 real 1m27.694s 00:12:14.394 user 2m40.367s 00:12:14.394 sys 0m12.328s 00:12:14.394 15:23:59 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.394 15:23:59 event -- common/autotest_common.sh@10 -- # set +x 00:12:14.394 15:23:59 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:14.394 15:23:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.394 15:23:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.394 15:23:59 -- common/autotest_common.sh@10 -- # set +x 00:12:14.394 ************************************ 00:12:14.394 START TEST thread 00:12:14.394 ************************************ 00:12:14.394 15:23:59 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:14.394 * Looking for test storage... 
00:12:14.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:14.394 15:23:59 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:14.394 15:23:59 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:12:14.394 15:23:59 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:14.394 15:24:00 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:14.394 15:24:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.394 15:24:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.394 15:24:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.394 15:24:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.394 15:24:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.394 15:24:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.394 15:24:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.394 15:24:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.394 15:24:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.394 15:24:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.394 15:24:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.394 15:24:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:12:14.394 15:24:00 thread -- scripts/common.sh@345 -- # : 1 00:12:14.394 15:24:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.394 15:24:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:14.394 15:24:00 thread -- scripts/common.sh@365 -- # decimal 1 00:12:14.394 15:24:00 thread -- scripts/common.sh@353 -- # local d=1 00:12:14.394 15:24:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.394 15:24:00 thread -- scripts/common.sh@355 -- # echo 1 00:12:14.394 15:24:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.394 15:24:00 thread -- scripts/common.sh@366 -- # decimal 2 00:12:14.394 15:24:00 thread -- scripts/common.sh@353 -- # local d=2 00:12:14.394 15:24:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.394 15:24:00 thread -- scripts/common.sh@355 -- # echo 2 00:12:14.394 15:24:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.394 15:24:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.394 15:24:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.394 15:24:00 thread -- scripts/common.sh@368 -- # return 0 00:12:14.394 15:24:00 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.394 15:24:00 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:14.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.394 --rc genhtml_branch_coverage=1 00:12:14.394 --rc genhtml_function_coverage=1 00:12:14.394 --rc genhtml_legend=1 00:12:14.394 --rc geninfo_all_blocks=1 00:12:14.394 --rc geninfo_unexecuted_blocks=1 00:12:14.394 00:12:14.394 ' 00:12:14.394 15:24:00 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:14.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.395 --rc genhtml_branch_coverage=1 00:12:14.395 --rc genhtml_function_coverage=1 00:12:14.395 --rc genhtml_legend=1 00:12:14.395 --rc geninfo_all_blocks=1 00:12:14.395 --rc geninfo_unexecuted_blocks=1 00:12:14.395 00:12:14.395 ' 00:12:14.395 15:24:00 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:14.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:12:14.395 --rc genhtml_branch_coverage=1 00:12:14.395 --rc genhtml_function_coverage=1 00:12:14.395 --rc genhtml_legend=1 00:12:14.395 --rc geninfo_all_blocks=1 00:12:14.395 --rc geninfo_unexecuted_blocks=1 00:12:14.395 00:12:14.395 ' 00:12:14.395 15:24:00 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:14.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.395 --rc genhtml_branch_coverage=1 00:12:14.395 --rc genhtml_function_coverage=1 00:12:14.395 --rc genhtml_legend=1 00:12:14.395 --rc geninfo_all_blocks=1 00:12:14.395 --rc geninfo_unexecuted_blocks=1 00:12:14.395 00:12:14.395 ' 00:12:14.395 15:24:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:14.395 15:24:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:14.395 15:24:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.395 15:24:00 thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.395 ************************************ 00:12:14.395 START TEST thread_poller_perf 00:12:14.395 ************************************ 00:12:14.395 15:24:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:14.395 [2024-11-20 15:24:00.157941] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:12:14.395 [2024-11-20 15:24:00.158765] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60811 ] 00:12:14.653 [2024-11-20 15:24:00.359435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.653 [2024-11-20 15:24:00.489387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.653 Running 1000 pollers for 1 seconds with 1 microseconds period. 
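(In the result block below, poller_cost is busy cycles divided by total_run_count, converted to time via tsc_hz: 2111442444 cyc / 383000 runs is roughly 5512 cyc per poll, and 5512 cyc / 2.1 GHz is roughly 2624 ns, matching the reported line. The zero-period run further down amortizes its second over about 4.18 million polls, so its per-poll cost drops to 503 cyc / 239 ns.)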
00:12:16.029 [2024-11-20T15:24:01.987Z] ======================================
00:12:16.029 [2024-11-20T15:24:01.987Z] busy:2111442444 (cyc)
00:12:16.029 [2024-11-20T15:24:01.987Z] total_run_count: 383000
00:12:16.029 [2024-11-20T15:24:01.987Z] tsc_hz: 2100000000 (cyc)
00:12:16.029 [2024-11-20T15:24:01.987Z] ======================================
00:12:16.029 [2024-11-20T15:24:01.987Z] poller_cost: 5512 (cyc), 2624 (nsec)
00:12:16.029
00:12:16.029 real 0m1.667s
00:12:16.029 user 0m1.440s
00:12:16.029 sys 0m0.115s
00:12:16.029 15:24:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:16.029 ************************************
00:12:16.029 END TEST thread_poller_perf
00:12:16.029 ************************************
00:12:16.029 15:24:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:12:16.029 15:24:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:12:16.029 15:24:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:12:16.029 15:24:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:16.029 15:24:01 thread -- common/autotest_common.sh@10 -- # set +x
00:12:16.029 ************************************
00:12:16.029 START TEST thread_poller_perf
00:12:16.029 ************************************
00:12:16.029 15:24:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:12:16.029 [2024-11-20 15:24:01.889134] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
00:12:16.029 [2024-11-20 15:24:01.889464] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60842 ]
00:12:16.287 [2024-11-20 15:24:02.087035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:16.287 [2024-11-20 15:24:02.226800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:16.287 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:12:17.692 [2024-11-20T15:24:03.650Z] ======================================
00:12:17.692 [2024-11-20T15:24:03.650Z] busy:2103807446 (cyc)
00:12:17.692 [2024-11-20T15:24:03.650Z] total_run_count: 4181000
00:12:17.692 [2024-11-20T15:24:03.650Z] tsc_hz: 2100000000 (cyc)
00:12:17.692 [2024-11-20T15:24:03.650Z] ======================================
00:12:17.692 [2024-11-20T15:24:03.650Z] poller_cost: 503 (cyc), 239 (nsec)
00:12:17.692
00:12:17.692 real 0m1.668s
00:12:17.692 user 0m1.452s
00:12:17.692 sys 0m0.104s
00:12:17.692 15:24:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:17.692 15:24:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:12:17.692 ************************************
00:12:17.692 END TEST thread_poller_perf
00:12:17.692 ************************************
00:12:17.692 15:24:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:12:17.692
00:12:17.692 real 0m3.671s
00:12:17.692 user 0m3.052s
00:12:17.692 sys 0m0.397s
00:12:17.692 15:24:03 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:17.692 ************************************
00:12:17.692 END TEST thread
00:12:17.692 ************************************
00:12:17.692 15:24:03 thread -- common/autotest_common.sh@10 -- # set +x
00:12:17.692 15:24:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:12:17.692 15:24:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:12:17.692 15:24:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:17.692 15:24:03 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:17.692 15:24:03 -- common/autotest_common.sh@10 -- # set +x
00:12:17.692 ************************************
00:12:17.692 START TEST app_cmdline
00:12:17.692 ************************************
00:12:17.692 15:24:03 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:12:17.952 * Looking for test storage...
00:12:17.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:17.952 15:24:03 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:17.952 15:24:03 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:12:17.952 15:24:03 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:17.952 15:24:03 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:12:17.952 15:24:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:12:17.953 15:24:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.953 15:24:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:12:17.953 15:24:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.953 15:24:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.953 15:24:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.953 15:24:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:12:17.953 15:24:03 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.953 15:24:03 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:17.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.953 --rc genhtml_branch_coverage=1 00:12:17.953 --rc genhtml_function_coverage=1 00:12:17.953 --rc genhtml_legend=1 00:12:17.953 --rc geninfo_all_blocks=1 00:12:17.953 --rc geninfo_unexecuted_blocks=1 00:12:17.953 00:12:17.953 ' 00:12:17.953 15:24:03 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:17.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.953 --rc genhtml_branch_coverage=1 00:12:17.953 --rc genhtml_function_coverage=1 00:12:17.953 --rc genhtml_legend=1 00:12:17.953 --rc geninfo_all_blocks=1 00:12:17.953 --rc geninfo_unexecuted_blocks=1 00:12:17.953 
00:12:17.953 ' 00:12:17.953 15:24:03 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:17.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.953 --rc genhtml_branch_coverage=1 00:12:17.953 --rc genhtml_function_coverage=1 00:12:17.953 --rc genhtml_legend=1 00:12:17.953 --rc geninfo_all_blocks=1 00:12:17.953 --rc geninfo_unexecuted_blocks=1 00:12:17.953 00:12:17.953 ' 00:12:17.953 15:24:03 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:17.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.953 --rc genhtml_branch_coverage=1 00:12:17.953 --rc genhtml_function_coverage=1 00:12:17.953 --rc genhtml_legend=1 00:12:17.953 --rc geninfo_all_blocks=1 00:12:17.953 --rc geninfo_unexecuted_blocks=1 00:12:17.953 00:12:17.953 ' 00:12:17.953 15:24:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:17.953 15:24:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60931 00:12:17.953 15:24:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60931 00:12:17.953 15:24:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60931 ']' 00:12:17.953 15:24:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:17.953 15:24:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.953 15:24:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.953 15:24:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.953 15:24:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.953 15:24:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:18.213 [2024-11-20 15:24:03.967847] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:12:18.213 [2024-11-20 15:24:03.968300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60931 ] 00:12:18.213 [2024-11-20 15:24:04.162413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.472 [2024-11-20 15:24:04.291806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.421 15:24:05 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.421 15:24:05 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:12:19.421 15:24:05 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:19.680 { 00:12:19.680 "version": "SPDK v25.01-pre git sha1 7bc1aace1", 00:12:19.680 "fields": { 00:12:19.680 "major": 25, 00:12:19.680 "minor": 1, 00:12:19.680 "patch": 0, 00:12:19.680 "suffix": "-pre", 00:12:19.680 "commit": "7bc1aace1" 00:12:19.680 } 00:12:19.680 } 00:12:19.680 15:24:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:19.680 15:24:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:19.680 15:24:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:19.680 15:24:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:19.680 15:24:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:19.680 15:24:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.680 15:24:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.680 15:24:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:19.680 15:24:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:19.680 15:24:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:19.680 15:24:05 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:19.939 request: 00:12:19.939 { 00:12:19.939 "method": "env_dpdk_get_mem_stats", 00:12:19.939 "req_id": 1 00:12:19.939 } 00:12:19.939 Got JSON-RPC error response 00:12:19.939 response: 00:12:19.939 { 00:12:19.939 "code": -32601, 00:12:19.939 "message": "Method not found" 00:12:19.939 } 00:12:19.939 15:24:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:12:19.939 15:24:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:19.939 15:24:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:19.939 15:24:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:19.939 15:24:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60931 00:12:19.940 15:24:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60931 ']' 00:12:19.940 15:24:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60931 00:12:19.940 15:24:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:12:19.940 15:24:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.940 15:24:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60931 00:12:19.940 killing process with pid 60931 00:12:19.940 15:24:05 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.940 15:24:05 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.940 15:24:05 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60931' 00:12:19.940 15:24:05 app_cmdline -- common/autotest_common.sh@973 -- # kill 60931 00:12:19.940 15:24:05 app_cmdline -- common/autotest_common.sh@978 -- # wait 60931 00:12:22.529 ************************************ 00:12:22.529 END TEST app_cmdline 00:12:22.529 ************************************ 00:12:22.529 00:12:22.529 real 0m4.655s 00:12:22.529 user 0m5.121s 00:12:22.529 sys 0m0.708s 00:12:22.529 15:24:08 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.529 15:24:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:22.529 15:24:08 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:22.529 15:24:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.529 15:24:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.529 15:24:08 -- common/autotest_common.sh@10 -- # set +x 00:12:22.529 ************************************ 00:12:22.529 START TEST version 00:12:22.529 ************************************ 00:12:22.529 15:24:08 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:22.529 * Looking for test storage... 
00:12:22.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:22.529 15:24:08 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:22.529 15:24:08 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:22.529 15:24:08 version -- common/autotest_common.sh@1693 -- # lcov --version 00:12:22.788 15:24:08 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:22.788 15:24:08 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.788 15:24:08 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.788 15:24:08 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.788 15:24:08 version -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.788 15:24:08 version -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.788 15:24:08 version -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.788 15:24:08 version -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.788 15:24:08 version -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.788 15:24:08 version -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.788 15:24:08 version -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.788 15:24:08 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.788 15:24:08 version -- scripts/common.sh@344 -- # case "$op" in 00:12:22.788 15:24:08 version -- scripts/common.sh@345 -- # : 1 00:12:22.788 15:24:08 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.788 15:24:08 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:22.788 15:24:08 version -- scripts/common.sh@365 -- # decimal 1 00:12:22.788 15:24:08 version -- scripts/common.sh@353 -- # local d=1 00:12:22.788 15:24:08 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.788 15:24:08 version -- scripts/common.sh@355 -- # echo 1 00:12:22.788 15:24:08 version -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.788 15:24:08 version -- scripts/common.sh@366 -- # decimal 2 00:12:22.788 15:24:08 version -- scripts/common.sh@353 -- # local d=2 00:12:22.788 15:24:08 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.788 15:24:08 version -- scripts/common.sh@355 -- # echo 2 00:12:22.788 15:24:08 version -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.789 15:24:08 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.789 15:24:08 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.789 15:24:08 version -- scripts/common.sh@368 -- # return 0 00:12:22.789 15:24:08 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.789 15:24:08 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:22.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.789 --rc genhtml_branch_coverage=1 00:12:22.789 --rc genhtml_function_coverage=1 00:12:22.789 --rc genhtml_legend=1 00:12:22.789 --rc geninfo_all_blocks=1 00:12:22.789 --rc geninfo_unexecuted_blocks=1 00:12:22.789 00:12:22.789 ' 00:12:22.789 15:24:08 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:22.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.789 --rc genhtml_branch_coverage=1 00:12:22.789 --rc genhtml_function_coverage=1 00:12:22.789 --rc genhtml_legend=1 00:12:22.789 --rc geninfo_all_blocks=1 00:12:22.789 --rc geninfo_unexecuted_blocks=1 00:12:22.789 00:12:22.789 ' 00:12:22.789 15:24:08 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:22.789 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:22.789 --rc genhtml_branch_coverage=1 00:12:22.789 --rc genhtml_function_coverage=1 00:12:22.789 --rc genhtml_legend=1 00:12:22.789 --rc geninfo_all_blocks=1 00:12:22.789 --rc geninfo_unexecuted_blocks=1 00:12:22.789 00:12:22.789 ' 00:12:22.789 15:24:08 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:22.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.789 --rc genhtml_branch_coverage=1 00:12:22.789 --rc genhtml_function_coverage=1 00:12:22.789 --rc genhtml_legend=1 00:12:22.789 --rc geninfo_all_blocks=1 00:12:22.789 --rc geninfo_unexecuted_blocks=1 00:12:22.789 00:12:22.789 ' 00:12:22.789 15:24:08 version -- app/version.sh@17 -- # get_header_version major 00:12:22.789 15:24:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.789 15:24:08 version -- app/version.sh@14 -- # cut -f2 00:12:22.789 15:24:08 version -- app/version.sh@14 -- # tr -d '"' 00:12:22.789 15:24:08 version -- app/version.sh@17 -- # major=25 00:12:22.789 15:24:08 version -- app/version.sh@18 -- # get_header_version minor 00:12:22.789 15:24:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.789 15:24:08 version -- app/version.sh@14 -- # tr -d '"' 00:12:22.789 15:24:08 version -- app/version.sh@14 -- # cut -f2 00:12:22.789 15:24:08 version -- app/version.sh@18 -- # minor=1 00:12:22.789 15:24:08 version -- app/version.sh@19 -- # get_header_version patch 00:12:22.789 15:24:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.789 15:24:08 version -- app/version.sh@14 -- # cut -f2 00:12:22.789 15:24:08 version -- app/version.sh@14 -- # tr -d '"' 00:12:22.789 15:24:08 version -- app/version.sh@19 -- # patch=0 00:12:22.789 15:24:08 version -- app/version.sh@20 -- # get_header_version suffix 00:12:22.789 15:24:08 version -- app/version.sh@14 -- # cut -f2 00:12:22.789 15:24:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.789 15:24:08 version -- app/version.sh@14 -- # tr -d '"' 00:12:22.789 15:24:08 version -- app/version.sh@20 -- # suffix=-pre 00:12:22.789 15:24:08 version -- app/version.sh@22 -- # version=25.1 00:12:22.789 15:24:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:22.789 15:24:08 version -- app/version.sh@28 -- # version=25.1rc0 00:12:22.789 15:24:08 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:22.789 15:24:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:22.789 15:24:08 version -- app/version.sh@30 -- # py_version=25.1rc0 00:12:22.789 15:24:08 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:12:22.789 00:12:22.789 real 0m0.278s 00:12:22.789 user 0m0.170s 00:12:22.789 sys 0m0.150s 00:12:22.789 ************************************ 00:12:22.789 END TEST version 00:12:22.789 ************************************ 00:12:22.789 15:24:08 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.789 15:24:08 version -- common/autotest_common.sh@10 -- # set +x 00:12:22.789 15:24:08 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:12:22.789 15:24:08 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:12:22.789 15:24:08 -- spdk/autotest.sh@194 -- # uname -s 00:12:22.789 15:24:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:12:22.789 15:24:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:22.789 15:24:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:22.789 15:24:08 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:12:22.789 15:24:08 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:22.789 15:24:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:22.789 15:24:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.789 15:24:08 -- common/autotest_common.sh@10 -- # set +x 00:12:22.789 ************************************ 00:12:22.789 START TEST blockdev_nvme 00:12:22.789 ************************************ 00:12:22.789 15:24:08 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:23.048 * Looking for test storage... 00:12:23.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:23.048 15:24:08 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:23.048 15:24:08 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:23.048 15:24:08 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:23.048 15:24:08 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.048 15:24:08 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:12:23.048 15:24:08 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.049 15:24:08 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:23.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.049 --rc genhtml_branch_coverage=1 00:12:23.049 --rc genhtml_function_coverage=1 00:12:23.049 --rc genhtml_legend=1 00:12:23.049 --rc geninfo_all_blocks=1 00:12:23.049 --rc geninfo_unexecuted_blocks=1 00:12:23.049 00:12:23.049 ' 00:12:23.049 15:24:08 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:23.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.049 --rc genhtml_branch_coverage=1 00:12:23.049 --rc genhtml_function_coverage=1 00:12:23.049 --rc genhtml_legend=1 00:12:23.049 --rc geninfo_all_blocks=1 00:12:23.049 --rc geninfo_unexecuted_blocks=1 00:12:23.049 00:12:23.049 ' 00:12:23.049 15:24:08 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:23.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.049 --rc genhtml_branch_coverage=1 00:12:23.049 --rc genhtml_function_coverage=1 00:12:23.049 --rc genhtml_legend=1 00:12:23.049 --rc geninfo_all_blocks=1 00:12:23.049 --rc geninfo_unexecuted_blocks=1 00:12:23.049 00:12:23.049 ' 00:12:23.049 15:24:08 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:23.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.049 --rc genhtml_branch_coverage=1 00:12:23.049 --rc genhtml_function_coverage=1 00:12:23.049 --rc genhtml_legend=1 00:12:23.049 --rc geninfo_all_blocks=1 00:12:23.049 --rc geninfo_unexecuted_blocks=1 00:12:23.049 00:12:23.049 ' 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:23.049 15:24:08 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61125 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61125 00:12:23.049 15:24:08 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:23.049 15:24:08 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61125 ']' 00:12:23.049 15:24:08 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.049 15:24:08 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.049 15:24:08 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.049 15:24:08 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.049 15:24:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.049 [2024-11-20 15:24:08.968934] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:12:23.049 [2024-11-20 15:24:08.969334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61125 ] 00:12:23.308 [2024-11-20 15:24:09.142376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.308 [2024-11-20 15:24:09.261551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.242 15:24:10 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.242 15:24:10 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:12:24.242 15:24:10 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:24.242 15:24:10 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:12:24.242 15:24:10 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:12:24.242 15:24:10 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:24.242 15:24:10 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:24.501 15:24:10 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:24.501 15:24:10 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.501 15:24:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.759 15:24:10 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.759 15:24:10 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:12:24.759 15:24:10 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.759 15:24:10 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.759 15:24:10 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.759 15:24:10 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:24.759 15:24:10 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:24.759 15:24:10 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.759 15:24:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.019 15:24:10 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.019 15:24:10 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:25.019 15:24:10 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:25.020 15:24:10 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ed8adb82-6444-4668-abbc-6a8c7c95c5b2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ed8adb82-6444-4668-abbc-6a8c7c95c5b2",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "5fc79e87-af48-47c8-9156-289d9d776f6c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5fc79e87-af48-47c8-9156-289d9d776f6c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "898a500e-8401-4387-8502-24537b0e6772"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "898a500e-8401-4387-8502-24537b0e6772",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1cb5ea32-e09e-42e0-83f2-51bb1f7cabb2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1cb5ea32-e09e-42e0-83f2-51bb1f7cabb2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a3f1b24f-6527-4327-8604-3ffa22c9a25a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "a3f1b24f-6527-4327-8604-3ffa22c9a25a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "52a88987-aa3f-4e2c-9e62-d79522c8a045"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "52a88987-aa3f-4e2c-9e62-d79522c8a045",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:25.020 15:24:10 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:25.020 15:24:10 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:12:25.020 15:24:10 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:25.020 15:24:10 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61125 00:12:25.020 15:24:10 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61125 ']' 00:12:25.020 15:24:10 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61125 00:12:25.020 15:24:10 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:12:25.020 15:24:10 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.020 15:24:10 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61125 00:12:25.020 killing process with pid 61125 00:12:25.020 15:24:10 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.020 15:24:10 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.020 15:24:10 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61125' 00:12:25.020 15:24:10 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61125 00:12:25.020 15:24:10 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61125 00:12:27.607 15:24:13 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:27.607 15:24:13 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:27.607 15:24:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:27.607 15:24:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.607 15:24:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.607 ************************************ 00:12:27.607 START TEST bdev_hello_world 00:12:27.607 ************************************ 00:12:27.607 15:24:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:27.607 [2024-11-20 15:24:13.414471] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:12:27.607 [2024-11-20 15:24:13.414680] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61220 ] 00:12:27.866 [2024-11-20 15:24:13.604767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.866 [2024-11-20 15:24:13.726294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.803 [2024-11-20 15:24:14.396483] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:28.803 [2024-11-20 15:24:14.396545] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:28.803 [2024-11-20 15:24:14.396594] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:28.803 [2024-11-20 15:24:14.399580] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:28.803 [2024-11-20 15:24:14.400137] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:28.803 [2024-11-20 15:24:14.400173] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:28.803 [2024-11-20 15:24:14.400393] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
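The hello_bdev example exercised above can also be run by hand with the same arguments. A minimal sketch, assuming the checkout at /home/vagrant/spdk_repo/spdk from this run and root privileges for hugepage/device access (both assumptions; the binary path, JSON config path, and bdev name are taken verbatim from the run_test command line above):

# Sketch: invoke the SPDK hello_bdev example directly, opening bdev
# Nvme0n1 through the same JSON config the harness passes above.
cd /home/vagrant/spdk_repo/spdk
sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1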
00:12:28.803 00:12:28.803 [2024-11-20 15:24:14.400419] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:29.737 00:12:29.737 real 0m2.272s 00:12:29.737 user 0m1.895s 00:12:29.737 sys 0m0.267s 00:12:29.737 15:24:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.737 15:24:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:29.737 ************************************ 00:12:29.737 END TEST bdev_hello_world 00:12:29.737 ************************************ 00:12:29.737 15:24:15 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:29.737 15:24:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:29.737 15:24:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.737 15:24:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:29.737 ************************************ 00:12:29.737 START TEST bdev_bounds 00:12:29.737 ************************************ 00:12:29.737 Process bdevio pid: 61268 00:12:29.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61268 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61268' 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61268 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61268 ']' 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.737 15:24:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:29.996 [2024-11-20 15:24:15.711181] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:12:29.996 [2024-11-20 15:24:15.711321] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61268 ] 00:12:29.996 [2024-11-20 15:24:15.883165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:30.255 [2024-11-20 15:24:16.007499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.255 [2024-11-20 15:24:16.007633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.255 [2024-11-20 15:24:16.007646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.822 15:24:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.822 15:24:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:30.822 15:24:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:31.080 I/O targets: 00:12:31.080 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:31.080 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:31.080 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:31.080 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:31.080 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:31.080 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:31.080 00:12:31.080 00:12:31.080 CUnit - A unit testing framework for C - Version 2.1-3 00:12:31.080 http://cunit.sourceforge.net/ 00:12:31.080 00:12:31.080 00:12:31.080 Suite: bdevio tests on: Nvme3n1 00:12:31.080 Test: blockdev write read block ...passed 00:12:31.080 Test: blockdev write zeroes read block ...passed 00:12:31.080 Test: blockdev write zeroes read no split ...passed 00:12:31.080 Test: blockdev write zeroes read split ...passed 00:12:31.080 Test: blockdev write zeroes read split partial ...passed 00:12:31.080 Test: blockdev reset ...[2024-11-20 15:24:16.877365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:12:31.080 [2024-11-20 15:24:16.881796] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:12:31.080 passed 00:12:31.080 Test: blockdev write read 8 blocks ...passed 00:12:31.080 Test: blockdev write read size > 128k ...passed 00:12:31.080 Test: blockdev write read invalid size ...passed 00:12:31.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.080 Test: blockdev write read max offset ...passed 00:12:31.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.080 Test: blockdev writev readv 8 blocks ...passed 00:12:31.080 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.080 Test: blockdev writev readv block ...passed 00:12:31.081 Test: blockdev writev readv size > 128k ...passed 00:12:31.081 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.081 Test: blockdev comparev and writev ...[2024-11-20 15:24:16.891070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:12:31.081 Test: blockdev nvme passthru rw ...passed 00:12:31.081 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2bc80a000 len:0x1000 00:12:31.081 [2024-11-20 15:24:16.891276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:31.081 passed 00:12:31.081 Test: blockdev nvme admin passthru ...[2024-11-20 15:24:16.891890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:31.081 [2024-11-20 15:24:16.891928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:31.081 passed 00:12:31.081 Test: blockdev copy ...passed 00:12:31.081 Suite: bdevio tests on: Nvme2n3 00:12:31.081 Test: blockdev write read block ...passed 00:12:31.081 Test: blockdev write zeroes read block ...passed 00:12:31.081 Test: blockdev write zeroes read no split ...passed 00:12:31.081 Test: blockdev write zeroes read split ...passed 00:12:31.081 Test: blockdev write zeroes read split partial ...passed 00:12:31.081 Test: blockdev reset ...[2024-11-20 15:24:16.969568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:31.081 passed 00:12:31.081 Test: blockdev write read 8 blocks ...[2024-11-20 15:24:16.974400] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:31.081 passed 00:12:31.081 Test: blockdev write read size > 128k ...passed 00:12:31.081 Test: blockdev write read invalid size ...passed 00:12:31.081 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.081 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.081 Test: blockdev write read max offset ...passed 00:12:31.081 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.081 Test: blockdev writev readv 8 blocks ...passed 00:12:31.081 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.081 Test: blockdev writev readv block ...passed 00:12:31.081 Test: blockdev writev readv size > 128k ...passed 00:12:31.081 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.081 Test: blockdev comparev and writev ...[2024-11-20 15:24:16.982808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:12:31.081 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x29be06000 len:0x1000 00:12:31.081 [2024-11-20 15:24:16.982983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:31.081 passed 00:12:31.081 Test: blockdev nvme passthru vendor specific ...passed 00:12:31.081 Test: blockdev nvme admin passthru ...[2024-11-20 15:24:16.983776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:31.081 [2024-11-20 15:24:16.983815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:31.081 passed 00:12:31.081 Test: blockdev copy ...passed 00:12:31.081 Suite: bdevio tests on: Nvme2n2 00:12:31.081 Test: blockdev write read block ...passed 00:12:31.081 Test: blockdev write zeroes read block ...passed 00:12:31.081 Test: blockdev write zeroes read no split ...passed 00:12:31.081 Test: blockdev write zeroes read split ...passed 00:12:31.340 Test: blockdev write zeroes read split partial ...passed 00:12:31.340 Test: blockdev reset ...[2024-11-20 15:24:17.084216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:31.340 [2024-11-20 15:24:17.088790] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:31.340 passed 00:12:31.340 Test: blockdev write read 8 blocks ...passed 00:12:31.340 Test: blockdev write read size > 128k ...passed 00:12:31.340 Test: blockdev write read invalid size ...passed 00:12:31.340 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.340 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.340 Test: blockdev write read max offset ...passed 00:12:31.340 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.340 Test: blockdev writev readv 8 blocks ...passed 00:12:31.340 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.340 Test: blockdev writev readv block ...passed 00:12:31.340 Test: blockdev writev readv size > 128k ...passed 00:12:31.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.340 Test: blockdev comparev and writev ...[2024-11-20 15:24:17.100297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc83c000 len:0x1000 00:12:31.340 [2024-11-20 15:24:17.100488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:31.340 passed 00:12:31.340 Test: blockdev nvme passthru rw ...passed 00:12:31.340 Test: blockdev nvme passthru vendor specific ...passed 00:12:31.340 Test: blockdev nvme admin passthru ...[2024-11-20 15:24:17.101721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:31.340 [2024-11-20 15:24:17.101763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:31.340 passed 00:12:31.340 Test: blockdev copy ...passed 00:12:31.340 Suite: bdevio tests on: Nvme2n1 00:12:31.340 Test: blockdev write read block ...passed 00:12:31.340 Test: blockdev write zeroes read block ...passed 00:12:31.340 Test: blockdev write zeroes read no split ...passed 00:12:31.340 Test: blockdev write zeroes read split ...passed 00:12:31.340 Test: blockdev write zeroes read split partial ...passed 00:12:31.340 Test: blockdev reset ...[2024-11-20 15:24:17.200619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:31.340 passed 00:12:31.340 Test: blockdev write read 8 blocks ...[2024-11-20 15:24:17.205088] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:31.340 passed 00:12:31.340 Test: blockdev write read size > 128k ...passed 00:12:31.340 Test: blockdev write read invalid size ...passed 00:12:31.340 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.340 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.340 Test: blockdev write read max offset ...passed 00:12:31.340 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.340 Test: blockdev writev readv 8 blocks ...passed 00:12:31.340 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.340 Test: blockdev writev readv block ...passed 00:12:31.340 Test: blockdev writev readv size > 128k ...passed 00:12:31.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.340 Test: blockdev comparev and writev ...[2024-11-20 15:24:17.214104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc838000 len:0x1000 00:12:31.340 [2024-11-20 15:24:17.214159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:31.340 passed 00:12:31.340 Test: blockdev nvme passthru rw ...passed 00:12:31.340 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:24:17.214773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:31.340 [2024-11-20 15:24:17.215015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:31.340 passed 00:12:31.340 Test: blockdev nvme admin passthru ...passed 00:12:31.340 Test: blockdev copy ...passed 00:12:31.340 Suite: bdevio tests on: Nvme1n1 00:12:31.340 Test: blockdev write read block ...passed 00:12:31.340 Test: blockdev write zeroes read block ...passed 00:12:31.340 Test: blockdev write zeroes read no split ...passed 00:12:31.340 Test: blockdev write zeroes read split ...passed 00:12:31.340 Test: blockdev write zeroes read split partial ...passed 00:12:31.340 Test: blockdev reset ...[2024-11-20 15:24:17.292553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:31.600 [2024-11-20 15:24:17.296978] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:12:31.600 passed 00:12:31.600 Test: blockdev write read 8 blocks ...passed 00:12:31.600 Test: blockdev write read size > 128k ...passed 00:12:31.600 Test: blockdev write read invalid size ...passed 00:12:31.600 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.600 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.600 Test: blockdev write read max offset ...passed 00:12:31.600 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.600 Test: blockdev writev readv 8 blocks ...passed 00:12:31.600 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.600 Test: blockdev writev readv block ...passed 00:12:31.600 Test: blockdev writev readv size > 128k ...passed 00:12:31.600 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.600 Test: blockdev comparev and writev ...[2024-11-20 15:24:17.305656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:12:31.600 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2cc834000 len:0x1000 00:12:31.600 [2024-11-20 15:24:17.305824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:31.600 passed 00:12:31.600 Test: blockdev nvme passthru vendor specific ...passed 00:12:31.600 Test: blockdev nvme admin passthru ...[2024-11-20 15:24:17.306558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:31.600 [2024-11-20 15:24:17.306618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:31.600 passed 00:12:31.600 Test: blockdev copy ...passed 00:12:31.600 Suite: bdevio tests on: Nvme0n1 00:12:31.600 Test: blockdev write read block ...passed 00:12:31.600 Test: blockdev write zeroes read block ...passed 00:12:31.600 Test: blockdev write zeroes read no split ...passed 00:12:31.600 Test: blockdev write zeroes read split ...passed 00:12:31.600 Test: blockdev write zeroes read split partial ...passed 00:12:31.600 Test: blockdev reset ...[2024-11-20 15:24:17.384073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:31.600 passed 00:12:31.600 Test: blockdev write read 8 blocks ...[2024-11-20 15:24:17.387907] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:12:31.600 passed 00:12:31.600 Test: blockdev write read size > 128k ...passed 00:12:31.600 Test: blockdev write read invalid size ...passed 00:12:31.600 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.600 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.600 Test: blockdev write read max offset ...passed 00:12:31.600 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.600 Test: blockdev writev readv 8 blocks ...passed 00:12:31.600 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.600 Test: blockdev writev readv block ...passed 00:12:31.600 Test: blockdev writev readv size > 128k ...passed 00:12:31.600 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.600 Test: blockdev comparev and writev ...passed 00:12:31.600 Test: blockdev nvme passthru rw ...[2024-11-20 15:24:17.395705] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:12:31.600 separate metadata which is not supported yet. 00:12:31.600 passed 00:12:31.600 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:24:17.396291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:12:31.600 [2024-11-20 15:24:17.396453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:12:31.600 passed 00:12:31.600 Test: blockdev nvme admin passthru ...passed 00:12:31.600 Test: blockdev copy ...passed 00:12:31.600 00:12:31.600 Run Summary: Type Total Ran Passed Failed Inactive 00:12:31.600 suites 6 6 n/a 0 0 00:12:31.600 tests 138 138 138 0 0 00:12:31.600 asserts 893 893 893 0 n/a 00:12:31.600 00:12:31.600 Elapsed time = 1.643 seconds 00:12:31.600 0 00:12:31.600 15:24:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61268 00:12:31.600 15:24:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61268 ']' 00:12:31.600 15:24:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61268 00:12:31.600 15:24:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:31.600 15:24:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.600 15:24:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61268 00:12:31.600 15:24:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.600 15:24:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.600 15:24:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61268' killing process with pid 61268 00:12:31.600 15:24:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61268 00:12:31.600 15:24:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61268 00:12:32.976 15:24:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:32.976 00:12:32.976 real 0m2.989s 00:12:32.976 user 0m7.716s 00:12:32.976 sys 0m0.433s 00:12:32.976 15:24:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.976 15:24:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:32.976 ************************************ 00:12:32.976 END TEST bdev_bounds
************************************ 00:12:32.976 15:24:18 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:32.976 15:24:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:32.976 15:24:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.976 15:24:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.976 ************************************ 00:12:32.976 START TEST bdev_nbd 00:12:32.976 ************************************ 00:12:32.976 15:24:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:32.976 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:32.976 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:32.976 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.976 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:32.976 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:32.976 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:32.976 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:12:32.976 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:32.976 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:32.976 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61333 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61333 /var/tmp/spdk-nbd.sock 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61333 ']' 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.977 15:24:18 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:32.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.977 15:24:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:32.977 [2024-11-20 15:24:18.782598] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:12:32.977 [2024-11-20 15:24:18.782953] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.235 [2024-11-20 15:24:18.970431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.235 [2024-11-20 15:24:19.107354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:34.168 15:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:12:34.168 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:34.169 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:34.169 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:34.169 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:34.169 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:34.169 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:34.169 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:34.169 15:24:20 blockdev_nvme.bdev_nbd 
-- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.490 1+0 records in 00:12:34.490 1+0 records out 00:12:34.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568932 s, 7.2 MB/s 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:34.490 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.748 1+0 records in 00:12:34.748 1+0 records out 00:12:34.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430468 s, 9.5 MB/s 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:34.748 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.007 1+0 records in 00:12:35.007 1+0 records out 00:12:35.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637837 s, 6.4 MB/s 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:35.007 15:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:12:35.265 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:35.265 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:35.265 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:35.265 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:35.265 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:35.265 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:35.266 
15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.266 1+0 records in 00:12:35.266 1+0 records out 00:12:35.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081707 s, 5.0 MB/s 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:35.266 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.525 1+0 records in 00:12:35.525 1+0 records out 00:12:35.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513246 s, 8.0 MB/s 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:35.525 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme3n1 00:12:36.092 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:36.092 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:36.092 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:36.092 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:36.092 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:36.092 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:36.092 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:36.092 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:36.092 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:36.092 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:36.092 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:36.093 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.093 1+0 records in 00:12:36.093 1+0 records out 00:12:36.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688201 s, 6.0 MB/s 00:12:36.093 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.093 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:36.093 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.093 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:36.093 15:24:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:36.093 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:36.093 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:36.093 15:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.360 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:36.360 { 00:12:36.360 "nbd_device": "/dev/nbd0", 00:12:36.360 "bdev_name": "Nvme0n1" 00:12:36.360 }, 00:12:36.360 { 00:12:36.360 "nbd_device": "/dev/nbd1", 00:12:36.360 "bdev_name": "Nvme1n1" 00:12:36.360 }, 00:12:36.360 { 00:12:36.360 "nbd_device": "/dev/nbd2", 00:12:36.360 "bdev_name": "Nvme2n1" 00:12:36.360 }, 00:12:36.360 { 00:12:36.360 "nbd_device": "/dev/nbd3", 00:12:36.360 "bdev_name": "Nvme2n2" 00:12:36.360 }, 00:12:36.360 { 00:12:36.360 "nbd_device": "/dev/nbd4", 00:12:36.360 "bdev_name": "Nvme2n3" 00:12:36.360 }, 00:12:36.360 { 00:12:36.360 "nbd_device": "/dev/nbd5", 00:12:36.360 "bdev_name": "Nvme3n1" 00:12:36.360 } 00:12:36.360 ]' 00:12:36.360 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:36.360 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:36.360 { 00:12:36.360 "nbd_device": "/dev/nbd0", 00:12:36.360 "bdev_name": "Nvme0n1" 00:12:36.360 }, 00:12:36.360 { 00:12:36.360 "nbd_device": "/dev/nbd1", 00:12:36.360 "bdev_name": "Nvme1n1" 00:12:36.360 }, 00:12:36.360 { 00:12:36.361 "nbd_device": "/dev/nbd2", 
00:12:36.361 "bdev_name": "Nvme2n1" 00:12:36.361 }, 00:12:36.361 { 00:12:36.361 "nbd_device": "/dev/nbd3", 00:12:36.361 "bdev_name": "Nvme2n2" 00:12:36.361 }, 00:12:36.361 { 00:12:36.361 "nbd_device": "/dev/nbd4", 00:12:36.361 "bdev_name": "Nvme2n3" 00:12:36.361 }, 00:12:36.361 { 00:12:36.361 "nbd_device": "/dev/nbd5", 00:12:36.361 "bdev_name": "Nvme3n1" 00:12:36.361 } 00:12:36.361 ]' 00:12:36.361 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:36.361 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:36.361 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.361 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:36.361 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:36.361 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:36.361 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.361 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.639 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:36.910 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:36.910 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 
-- # waitfornbd_exit nbd2 00:12:36.910 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:36.910 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.910 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.910 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:36.910 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:36.910 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.910 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.910 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:37.170 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:37.170 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:37.170 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:37.170 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.170 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.170 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:37.170 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.170 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.170 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.170 15:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:37.429 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:37.429 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:37.429 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:37.429 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.429 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.429 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:37.429 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.429 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.429 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.429 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:37.688 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:37.688 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:37.688 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:37.688 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.688 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.688 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:37.688 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.688 15:24:23 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:37.688 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:37.688 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.688 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:37.946 15:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1 /dev/nbd0 00:12:38.204 /dev/nbd0 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.204 1+0 records in 00:12:38.204 1+0 records out 00:12:38.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060277 s, 6.8 MB/s 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:38.204 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:12:38.462 /dev/nbd1 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.462 1+0 records in 00:12:38.462 1+0 records out 00:12:38.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562944 s, 7.3 MB/s 
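The waitfornbd checks traced just above (autotest_common.sh@872-893) follow one pattern throughout this run: poll /proc/partitions until the kernel registers the nbd device, then read a single 4 KiB block through it with O_DIRECT and confirm the copy is non-empty. A minimal standalone sketch of that pattern; the retry cadence and the scratch-file path are assumptions, not copies of the repository helper.

waitfornbd() {
    # Sketch of the readiness check seen in the trace above; /tmp/nbdtest is illustrative.
    local nbd_name=$1 i size
    local tmp_file=/tmp/nbdtest

    # Poll until the device shows up in /proc/partitions (the trace bounds this at 20 tries)
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # Read one 4 KiB block with O_DIRECT to prove the device actually services I/O
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct 2>/dev/null && break
        sleep 0.1
    done

    # A zero-byte copy means the read never completed
    size=$(stat -c %s "$tmp_file")
    rm -f "$tmp_file"
    [[ $size != 0 ]]
}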
00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:38.462 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:12:38.721 /dev/nbd10 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.721 1+0 records in 00:12:38.721 1+0 records out 00:12:38.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061968 s, 6.6 MB/s 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:38.721 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:12:38.979 /dev/nbd11 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 
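The nbd_start_disk RPCs above come from the loop in nbd_common.sh@9-17, which pairs each bdev with a /dev/nbdN node over the application's RPC socket and then waits for the node to become usable. A sketch of that loop, reusing the waitfornbd helper above; rpc.py and the socket path are the ones printed in the trace, and the two lists are the ones this test drives.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk-nbd.sock
bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')

for ((i = 0; i < ${#nbd_list[@]}; i++)); do
    # Export the bdev as a kernel nbd block device over the RPC socket...
    "$rpc_py" -s "$rpc_sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    # ...then block until the node is registered and readable
    waitfornbd "$(basename "${nbd_list[i]}")"
done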
00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.979 1+0 records in 00:12:38.979 1+0 records out 00:12:38.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723825 s, 5.7 MB/s 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:38.979 15:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:12:39.546 /dev/nbd12 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.546 1+0 records in 00:12:39.546 1+0 records out 00:12:39.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793224 s, 5.2 MB/s 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:39.546 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:12:39.805 /dev/nbd13 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.805 1+0 records in 00:12:39.805 1+0 records out 00:12:39.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737344 s, 5.6 MB/s 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.805 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd0", 00:12:40.064 "bdev_name": "Nvme0n1" 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd1", 00:12:40.064 "bdev_name": "Nvme1n1" 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd10", 00:12:40.064 "bdev_name": "Nvme2n1" 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd11", 00:12:40.064 "bdev_name": "Nvme2n2" 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd12", 00:12:40.064 "bdev_name": "Nvme2n3" 00:12:40.064 
}, 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd13", 00:12:40.064 "bdev_name": "Nvme3n1" 00:12:40.064 } 00:12:40.064 ]' 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd0", 00:12:40.064 "bdev_name": "Nvme0n1" 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd1", 00:12:40.064 "bdev_name": "Nvme1n1" 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd10", 00:12:40.064 "bdev_name": "Nvme2n1" 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd11", 00:12:40.064 "bdev_name": "Nvme2n2" 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd12", 00:12:40.064 "bdev_name": "Nvme2n3" 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "nbd_device": "/dev/nbd13", 00:12:40.064 "bdev_name": "Nvme3n1" 00:12:40.064 } 00:12:40.064 ]' 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:40.064 /dev/nbd1 00:12:40.064 /dev/nbd10 00:12:40.064 /dev/nbd11 00:12:40.064 /dev/nbd12 00:12:40.064 /dev/nbd13' 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:40.064 /dev/nbd1 00:12:40.064 /dev/nbd10 00:12:40.064 /dev/nbd11 00:12:40.064 /dev/nbd12 00:12:40.064 /dev/nbd13' 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:40.064 256+0 records in 00:12:40.064 256+0 records out 00:12:40.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00960752 s, 109 MB/s 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:40.064 15:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:40.323 256+0 records in 00:12:40.323 256+0 records out 00:12:40.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133823 s, 7.8 MB/s 00:12:40.323 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:40.323 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:12:40.323 256+0 records in 00:12:40.323 256+0 records out 00:12:40.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175786 s, 6.0 MB/s 00:12:40.323 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:40.323 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:40.582 256+0 records in 00:12:40.582 256+0 records out 00:12:40.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137751 s, 7.6 MB/s 00:12:40.582 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:40.582 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:40.582 256+0 records in 00:12:40.582 256+0 records out 00:12:40.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133777 s, 7.8 MB/s 00:12:40.582 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:40.582 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:40.842 256+0 records in 00:12:40.842 256+0 records out 00:12:40.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133918 s, 7.8 MB/s 00:12:40.842 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:40.842 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:41.102 256+0 records in 00:12:41.102 256+0 records out 00:12:41.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133578 s, 7.8 MB/s 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:41.102 15:24:26 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.102 15:24:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:41.362 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:41.362 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:41.362 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:41.362 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.362 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.362 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:41.362 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:41.362 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.362 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.362 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:41.621 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:41.621 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:41.621 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:41.621 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.621 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.621 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:41.621 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:41.621 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.621 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.621 15:24:27 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:41.882 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:41.882 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:41.882 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:41.882 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.882 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.882 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:41.882 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:41.882 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.882 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.882 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:41.882 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:42.141 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:42.141 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:42.141 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.141 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.141 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:42.141 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.141 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.141 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.141 15:24:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:42.400 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:42.400 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:42.400 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:42.400 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.400 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.400 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:42.400 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.400 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.400 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.400 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:42.658 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:42.658 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:42.658 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:42.658 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.658 15:24:28 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.658 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:42.658 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.658 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.658 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:42.659 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.659 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:42.916 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:43.174 malloc_lvol_verify 00:12:43.174 15:24:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:43.174 034af678-5fb8-4deb-b3cc-13185fd4ca4f 00:12:43.174 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:43.430 b4796e0e-bc36-4545-8aba-ca532da01987 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:43.688 /dev/nbd0 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:43.688 mke2fs 1.47.0 
(5-Feb-2023) 00:12:43.688 Discarding device blocks: 0/4096 done 00:12:43.688 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:43.688 00:12:43.688 Allocating group tables: 0/1 done 00:12:43.688 Writing inode tables: 0/1 done 00:12:43.688 Creating journal (1024 blocks): done 00:12:43.688 Writing superblocks and filesystem accounting information: 0/1 done 00:12:43.688 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.688 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61333 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61333 ']' 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61333 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.946 15:24:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61333 00:12:44.204 killing process with pid 61333 00:12:44.204 15:24:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.204 15:24:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.204 15:24:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61333' 00:12:44.204 15:24:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61333 00:12:44.204 15:24:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61333 00:12:45.579 ************************************ 00:12:45.579 END TEST bdev_nbd 00:12:45.579 ************************************ 00:12:45.579 15:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:45.579 00:12:45.579 real 0m12.485s 00:12:45.579 user 0m16.601s 00:12:45.579 sys 0m5.034s 00:12:45.579 15:24:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.579 15:24:31 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.579 15:24:31 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:45.579 15:24:31 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:12:45.579 skipping fio tests on NVMe due to multi-ns failures. 00:12:45.579 15:24:31 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:12:45.579 15:24:31 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:45.579 15:24:31 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:45.579 15:24:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:45.579 15:24:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.579 15:24:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.579 ************************************ 00:12:45.579 START TEST bdev_verify 00:12:45.579 ************************************ 00:12:45.579 15:24:31 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:45.579 [2024-11-20 15:24:31.346808] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:12:45.579 [2024-11-20 15:24:31.347879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61735 ] 00:12:45.837 [2024-11-20 15:24:31.545951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:45.837 [2024-11-20 15:24:31.667478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.837 [2024-11-20 15:24:31.667507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.774 Running I/O for 5 seconds... 
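The bdev_verify run now in progress is a single bdevperf invocation against the generated JSON config; the command below repeats the flags from the run_test line above (queue depth 128, 4 KiB verified I/O, 5 seconds, cores 0-1 via mask 0x3), with both paths taken from the log.

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

# Same invocation as the bdev_verify trace above
"$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3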
00:12:48.648 18688.00 IOPS, 73.00 MiB/s [2024-11-20T15:24:36.067Z] 17792.00 IOPS, 69.50 MiB/s [2024-11-20T15:24:37.002Z] 18346.67 IOPS, 71.67 MiB/s [2024-11-20T15:24:37.570Z] 18240.00 IOPS, 71.25 MiB/s
00:12:51.612 Latency(us)
00:12:51.612 [2024-11-20T15:24:37.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:51.612 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0x0 length 0xbd0bd
00:12:51.612 Nvme0n1 : 5.09 1459.77 5.70 0.00 0.00 87497.34 11734.06 76396.25
00:12:51.612 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:12:51.612 Nvme0n1 : 5.06 1543.33 6.03 0.00 0.00 82736.16 15166.90 77394.90
00:12:51.612 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0x0 length 0xa0000
00:12:51.612 Nvme1n1 : 5.09 1459.25 5.70 0.00 0.00 87375.04 12108.56 74398.96
00:12:51.612 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0xa0000 length 0xa0000
00:12:51.612 Nvme1n1 : 5.06 1542.86 6.03 0.00 0.00 82610.38 15166.90 71902.35
00:12:51.612 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0x0 length 0x80000
00:12:51.612 Nvme2n1 : 5.09 1458.73 5.70 0.00 0.00 87095.63 12545.46 70404.39
00:12:51.612 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0x80000 length 0x80000
00:12:51.612 Nvme2n1 : 5.06 1542.38 6.02 0.00 0.00 82395.83 15291.73 69905.07
00:12:51.612 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0x0 length 0x80000
00:12:51.612 Nvme2n2 : 5.09 1458.21 5.70 0.00 0.00 86962.50 12857.54 71403.03
00:12:51.612 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0x80000 length 0x80000
00:12:51.612 Nvme2n2 : 5.06 1541.88 6.02 0.00 0.00 82265.25 15166.90 71403.03
00:12:51.612 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0x0 length 0x80000
00:12:51.612 Nvme2n3 : 5.09 1457.70 5.69 0.00 0.00 86831.23 13294.45 75397.61
00:12:51.612 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0x80000 length 0x80000
00:12:51.612 Nvme2n3 : 5.07 1541.40 6.02 0.00 0.00 82137.40 14417.92 71403.03
00:12:51.612 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0x0 length 0x20000
00:12:51.612 Nvme3n1 : 5.09 1457.18 5.69 0.00 0.00 86703.35 12420.63 76396.25
00:12:51.612 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:51.612 Verification LBA range: start 0x20000 length 0x20000
00:12:51.612 Nvme3n1 : 5.07 1551.77 6.06 0.00 0.00 81487.76 3073.95 73899.64
00:12:51.612 [2024-11-20T15:24:37.570Z] ===================================================================================================================
00:12:51.612 [2024-11-20T15:24:37.570Z] Total : 18014.46 70.37 0.00 0.00 84612.06 3073.95 77394.90
00:12:53.515
00:12:53.515 real 0m7.907s
00:12:53.515 user 0m14.517s
00:12:53.515 sys 0m0.384s
00:12:53.515 15:24:39 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:53.515
15:24:39 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:53.515 ************************************ 00:12:53.515 END TEST bdev_verify 00:12:53.515 ************************************ 00:12:53.515 15:24:39 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:53.515 15:24:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:53.515 15:24:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.515 15:24:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.515 ************************************ 00:12:53.515 START TEST bdev_verify_big_io 00:12:53.515 ************************************ 00:12:53.515 15:24:39 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:53.515 [2024-11-20 15:24:39.270617] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:12:53.515 [2024-11-20 15:24:39.270738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61839 ] 00:12:53.515 [2024-11-20 15:24:39.442132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:53.774 [2024-11-20 15:24:39.557674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.774 [2024-11-20 15:24:39.557701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.711 Running I/O for 5 seconds... 
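bdev_verify_big_io, starting above, reuses the same harness; against the command shown earlier, the only change in the run_test line is the I/O size flag, 65536 bytes in place of 4096.

# The big-I/O variant from the run_test line above: identical except -o 65536
"$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3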
00:12:58.640 1933.00 IOPS, 120.81 MiB/s [2024-11-20T15:24:45.974Z] 2746.50 IOPS, 171.66 MiB/s [2024-11-20T15:24:46.232Z] 2606.33 IOPS, 162.90 MiB/s [2024-11-20T15:24:46.232Z] 2649.00 IOPS, 165.56 MiB/s
00:13:00.274 Latency(us)
00:13:00.274 [2024-11-20T15:24:46.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:00.274 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0x0 length 0xbd0b
00:13:00.274 Nvme0n1 : 5.53 150.34 9.40 0.00 0.00 818969.39 18225.25 854839.10
00:13:00.274 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0xbd0b length 0xbd0b
00:13:00.274 Nvme0n1 : 5.56 158.88 9.93 0.00 0.00 785107.44 21346.01 794920.47
00:13:00.274 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0x0 length 0xa000
00:13:00.274 Nvme1n1 : 5.64 145.10 9.07 0.00 0.00 817692.11 64412.53 1398101.33
00:13:00.274 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0xa000 length 0xa000
00:13:00.274 Nvme1n1 : 5.57 156.51 9.78 0.00 0.00 775417.97 55924.05 699050.67
00:13:00.274 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0x0 length 0x8000
00:13:00.274 Nvme2n1 : 5.68 154.64 9.66 0.00 0.00 760995.27 42692.02 1414079.63
00:13:00.274 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0x8000 length 0x8000
00:13:00.274 Nvme2n1 : 5.64 159.38 9.96 0.00 0.00 741349.87 59668.97 818887.92
00:13:00.274 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0x0 length 0x8000
00:13:00.274 Nvme2n2 : 5.72 159.66 9.98 0.00 0.00 722173.60 20971.52 1446036.24
00:13:00.274 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0x8000 length 0x8000
00:13:00.274 Nvme2n2 : 5.64 162.16 10.14 0.00 0.00 715365.67 45937.62 703045.24
00:13:00.274 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0x0 length 0x8000
00:13:00.274 Nvme2n3 : 5.74 165.06 10.32 0.00 0.00 682885.58 15166.90 1462014.54
00:13:00.274 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0x8000 length 0x8000
00:13:00.274 Nvme2n3 : 5.68 176.85 11.05 0.00 0.00 647976.01 8426.06 822882.50
00:13:00.274 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0x0 length 0x2000
00:13:00.274 Nvme3n1 : 5.75 185.80 11.61 0.00 0.00 594707.04 6210.32 1038589.56
00:13:00.274 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:00.274 Verification LBA range: start 0x2000 length 0x2000
00:13:00.274 Nvme3n1 : 5.68 180.29 11.27 0.00 0.00 619578.04 5430.13 750980.14
00:13:00.274 [2024-11-20T15:24:46.232Z] ===================================================================================================================
00:13:00.274 [2024-11-20T15:24:46.232Z] Total : 1954.67 122.17 0.00 0.00 717914.87 5430.13 1462014.54
00:13:02.179
00:13:02.179 real 0m8.919s
00:13:02.179 user 0m16.667s
00:13:02.179 sys 0m0.319s
00:13:02.179 15:24:48 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- #
xtrace_disable 00:13:02.179 15:24:48 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.179 ************************************ 00:13:02.179 END TEST bdev_verify_big_io 00:13:02.179 ************************************ 00:13:02.437 15:24:48 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:02.437 15:24:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:02.437 15:24:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.437 15:24:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.437 ************************************ 00:13:02.437 START TEST bdev_write_zeroes 00:13:02.437 ************************************ 00:13:02.437 15:24:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:02.437 [2024-11-20 15:24:48.257817] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:13:02.437 [2024-11-20 15:24:48.257938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61953 ] 00:13:02.696 [2024-11-20 15:24:48.434532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.696 [2024-11-20 15:24:48.555273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.630 Running I/O for 1 seconds... 
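bdev_write_zeroes, launched above, again swaps only the workload flags: a write_zeroes pass for one second, on a single core per the EAL '-c 0x1' line in the trace. Reusing the same variables, and with the trailing empty argument mirroring the logged invocation:

# The bdev_write_zeroes run traced above: 4 KiB write_zeroes for 1 second
"$bdevperf" --json "$conf" -q 128 -o 4096 -w write_zeroes -t 1 ''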
00:13:04.605 57984.00 IOPS, 226.50 MiB/s
00:13:04.605 Latency(us)
00:13:04.605 [2024-11-20T15:24:50.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:04.605 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:04.605 Nvme0n1 : 1.03 9597.78 37.49 0.00 0.00 13303.16 7739.49 28086.86
00:13:04.605 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:04.605 Nvme1n1 : 1.03 9583.20 37.43 0.00 0.00 13305.27 9799.19 27462.70
00:13:04.605 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:04.605 Nvme2n1 : 1.03 9568.94 37.38 0.00 0.00 13268.92 9424.70 26838.55
00:13:04.605 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:04.605 Nvme2n2 : 1.03 9554.76 37.32 0.00 0.00 13216.28 8113.98 26089.57
00:13:04.605 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:04.605 Nvme2n3 : 1.03 9540.55 37.27 0.00 0.00 13215.20 7614.66 26339.23
00:13:04.605 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:04.605 Nvme3n1 : 1.03 9526.37 37.21 0.00 0.00 13211.24 7365.00 28336.52
00:13:04.605 [2024-11-20T15:24:50.563Z] ===================================================================================================================
00:13:04.605 [2024-11-20T15:24:50.563Z] Total : 57371.60 224.11 0.00 0.00 13253.35 7365.00 28336.52
00:13:05.983
00:13:05.983 real 0m3.412s
00:13:05.983 user 0m3.016s
00:13:05.983 sys 0m0.277s
00:13:05.983 15:24:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:05.983 15:24:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:13:05.983 ************************************
00:13:05.983 END TEST bdev_write_zeroes
00:13:05.983 ************************************
00:13:05.983 15:24:51 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:05.983 15:24:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:05.983 15:24:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:05.983 15:24:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:05.983 ************************************
00:13:05.983 START TEST bdev_json_nonenclosed
00:13:05.983 ************************************
00:13:05.983 15:24:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:05.983 [2024-11-20 15:24:51.778560] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
00:13:05.983 [2024-11-20 15:24:51.778759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62012 ] 00:13:06.243 [2024-11-20 15:24:51.971397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.243 [2024-11-20 15:24:52.092789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.243 [2024-11-20 15:24:52.092894] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:06.243 [2024-11-20 15:24:52.092916] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:06.243 [2024-11-20 15:24:52.092929] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:06.503 00:13:06.503 real 0m0.705s 00:13:06.503 user 0m0.434s 00:13:06.503 sys 0m0.165s 00:13:06.503 15:24:52 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.503 15:24:52 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:06.503 ************************************ 00:13:06.503 END TEST bdev_json_nonenclosed 00:13:06.503 ************************************ 00:13:06.503 15:24:52 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:06.503 15:24:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:06.503 15:24:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.503 15:24:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.503 ************************************ 00:13:06.503 START TEST bdev_json_nonarray 00:13:06.503 ************************************ 00:13:06.503 15:24:52 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:06.762 [2024-11-20 15:24:52.541343] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:13:06.762 [2024-11-20 15:24:52.541524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62037 ] 00:13:07.021 [2024-11-20 15:24:52.733170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.021 [2024-11-20 15:24:52.853264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.021 [2024-11-20 15:24:52.853374] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
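bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is pointed at a deliberately malformed config and must fail with the errors shown above ('not enclosed in {}' and ''subsystems' should be an array'). A sketch of the pattern; the here-doc body is an illustrative stand-in for the repository's nonenclosed.json, not a copy of it.

# Negative-test sketch: bdevperf must reject a config that is not a JSON object.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF

if "$bdevperf" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
    echo 'expected bdevperf to fail on a non-enclosed config' >&2
    exit 1
fi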
00:13:07.021 [2024-11-20 15:24:52.853398] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:13:07.021 [2024-11-20 15:24:52.853411] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:07.279
00:13:07.279 real 0m0.703s
00:13:07.279 user 0m0.439s
00:13:07.279 sys 0m0.158s
15:24:53 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:07.279 15:24:53 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:13:07.279 ************************************
00:13:07.279 END TEST bdev_json_nonarray
00:13:07.279 ************************************
00:13:07.279 15:24:53 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:13:07.279 15:24:53 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:13:07.279 15:24:53 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:13:07.279 15:24:53 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:13:07.279 15:24:53 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:13:07.279 15:24:53 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:13:07.279 15:24:53 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:07.279 15:24:53 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:13:07.279 15:24:53 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:13:07.279 15:24:53 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:13:07.279 15:24:53 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:13:07.279
00:13:07.279 real 0m44.521s
00:13:07.279 user 1m6.150s
00:13:07.279 sys 0m8.156s
00:13:07.279 15:24:53 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:07.279 ************************************
00:13:07.279 END TEST blockdev_nvme
00:13:07.279 ************************************
00:13:07.538 15:24:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:07.538 15:24:53 -- spdk/autotest.sh@209 -- # uname -s
00:13:07.538 15:24:53 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:13:07.538 15:24:53 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:13:07.538 15:24:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:07.538 15:24:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:07.538 15:24:53 -- common/autotest_common.sh@10 -- # set +x
00:13:07.538 ************************************
00:13:07.538 START TEST blockdev_nvme_gpt
00:13:07.538 ************************************
00:13:07.538 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:13:07.538 * Looking for test storage...
00:13:07.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:07.538 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:07.538 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:13:07.538 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.538 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.538 15:24:53 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:13:07.538 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.538 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.538 --rc genhtml_branch_coverage=1 00:13:07.538 --rc genhtml_function_coverage=1 00:13:07.538 --rc genhtml_legend=1 00:13:07.538 --rc geninfo_all_blocks=1 00:13:07.538 --rc geninfo_unexecuted_blocks=1 00:13:07.538 00:13:07.538 ' 00:13:07.538 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.538 --rc 
genhtml_branch_coverage=1 00:13:07.538 --rc genhtml_function_coverage=1 00:13:07.538 --rc genhtml_legend=1 00:13:07.538 --rc geninfo_all_blocks=1 00:13:07.538 --rc geninfo_unexecuted_blocks=1 00:13:07.538 00:13:07.538 ' 00:13:07.538 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.538 --rc genhtml_branch_coverage=1 00:13:07.538 --rc genhtml_function_coverage=1 00:13:07.538 --rc genhtml_legend=1 00:13:07.538 --rc geninfo_all_blocks=1 00:13:07.538 --rc geninfo_unexecuted_blocks=1 00:13:07.538 00:13:07.538 ' 00:13:07.538 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.538 --rc genhtml_branch_coverage=1 00:13:07.538 --rc genhtml_function_coverage=1 00:13:07.538 --rc genhtml_legend=1 00:13:07.538 --rc geninfo_all_blocks=1 00:13:07.538 --rc geninfo_unexecuted_blocks=1 00:13:07.538 00:13:07.538 ' 00:13:07.538 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:07.538 15:24:53 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:13:07.538 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:07.538 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:07.538 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:07.538 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:07.538 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:07.538 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:07.538 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:13:07.538 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62123 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62123 
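(Note: the lcov probe earlier in this setup funnels through the generic version comparator in scripts/common.sh, whose xtrace is visible above: split on dots/dashes/colons, then compare component-wise. A simplified, self-contained bash sketch of that logic; the real helper also normalizes components through a decimal() filter and supports more operators:)

    # Sketch of cmp_versions as traced above: "lt 1.15 2" asks whether
    # lcov 1.15 predates the 2.x option names. Assumes plain decimal
    # version components.
    cmp_versions() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
        # Missing components compare as 0, so "2" behaves like "2.0".
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"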
00:13:07.539 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62123 ']' 00:13:07.539 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.539 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.539 15:24:53 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:07.539 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.539 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.539 15:24:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:07.797 [2024-11-20 15:24:53.598211] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:13:07.797 [2024-11-20 15:24:53.598392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62123 ] 00:13:08.056 [2024-11-20 15:24:53.790932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.056 [2024-11-20 15:24:53.911632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.992 15:24:54 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.992 15:24:54 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:13:08.992 15:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:08.992 15:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:13:08.992 15:24:54 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:09.251 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:09.510 Waiting for block devices as requested 00:13:09.769 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:09.769 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:09.769 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:10.028 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.297 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:15.297 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:15.297 15:25:00 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:15.297 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:13:15.298 15:25:00 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:13:15.298 BYT; 00:13:15.298 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:13:15.298 BYT; 00:13:15.298 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:15.298 15:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:13:15.298 15:25:01 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:15.298 15:25:01 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:15.298 15:25:01 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:15.298 15:25:01 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:15.298 15:25:01 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:15.298 15:25:01 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:13:16.233 The operation has completed successfully. 00:13:16.233 15:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:13:17.172 The operation has completed successfully. 00:13:17.172 15:25:03 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:17.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:18.676 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.676 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.676 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.676 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.676 15:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:13:18.676 15:25:04 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.676 15:25:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:18.676 [] 00:13:18.676 15:25:04 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.676 15:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:13:18.676 15:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:13:18.676 15:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:13:18.676 15:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:18.934 15:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:13:18.934 15:25:04 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.934 15:25:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:19.193 15:25:04 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.193 15:25:04 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:19.193 15:25:04 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.193 15:25:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:19.193 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.193 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:13:19.193 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:19.193 15:25:05 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.193 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:19.193 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.193 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:19.193 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.193 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:19.193 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.193 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:19.193 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.193 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:19.193 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.193 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:19.193 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:19.193 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:19.193 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.193 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:19.452 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.452 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:19.452 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "306b15ab-c6d9-4e51-99c1-b3932e22e1cf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "306b15ab-c6d9-4e51-99c1-b3932e22e1cf",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compar 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:19.453 e_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c0bf0bf0-2673-4158-8135-9183ee9c4229"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c0bf0bf0-2673-4158-8135-9183ee9c4229",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9e340d18-eb8a-493d-9223-3f9dfee2eba7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9e340d18-eb8a-493d-9223-3f9dfee2eba7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b8eff340-5ea5-43cd-a110-5f802a10e51a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b8eff340-5ea5-43cd-a110-5f802a10e51a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "effc85b0-1b57-446d-bcd2-351ca20fe39f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "effc85b0-1b57-446d-bcd2-351ca20fe39f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:13:19.453 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:19.453 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:13:19.453 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:19.453 15:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62123 00:13:19.453 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62123 ']' 00:13:19.453 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62123 00:13:19.453 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:13:19.453 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.453 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62123 00:13:19.453 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.453 killing process with pid 62123 00:13:19.453 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.453 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62123' 00:13:19.453 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62123 00:13:19.453 15:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62123 00:13:21.988 15:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:21.988 15:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:21.988 15:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:21.988 15:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.988 15:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:21.988 ************************************ 00:13:21.988 START TEST bdev_hello_world 00:13:21.988 ************************************ 00:13:21.988 15:25:07 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:22.247 
[2024-11-20 15:25:07.994450] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:13:22.247 [2024-11-20 15:25:07.994711] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62764 ] 00:13:22.247 [2024-11-20 15:25:08.183577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.505 [2024-11-20 15:25:08.309990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.073 [2024-11-20 15:25:08.984311] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:23.073 [2024-11-20 15:25:08.984361] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:13:23.073 [2024-11-20 15:25:08.984399] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:23.073 [2024-11-20 15:25:08.987488] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:23.073 [2024-11-20 15:25:08.988015] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:23.073 [2024-11-20 15:25:08.988050] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:23.073 [2024-11-20 15:25:08.988325] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:23.073 00:13:23.073 [2024-11-20 15:25:08.988361] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:24.448 00:13:24.448 real 0m2.297s 00:13:24.448 user 0m1.894s 00:13:24.448 sys 0m0.293s 00:13:24.448 15:25:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.448 15:25:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:24.448 ************************************ 00:13:24.448 END TEST bdev_hello_world 00:13:24.448 ************************************ 00:13:24.448 15:25:10 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:24.448 15:25:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.448 15:25:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.448 15:25:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.448 ************************************ 00:13:24.448 START TEST bdev_bounds 00:13:24.448 ************************************ 00:13:24.448 15:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:24.448 15:25:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62812 00:13:24.448 15:25:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:24.448 15:25:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:24.448 Process bdevio pid: 62812 00:13:24.448 15:25:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62812' 00:13:24.448 15:25:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62812 00:13:24.448 15:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62812 ']' 00:13:24.448 15:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.449 15:25:10 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.449 15:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.449 15:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.449 15:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:24.449 [2024-11-20 15:25:10.313664] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:13:24.449 [2024-11-20 15:25:10.313822] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62812 ] 00:13:24.707 [2024-11-20 15:25:10.487300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:24.707 [2024-11-20 15:25:10.615533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.707 [2024-11-20 15:25:10.615676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.707 [2024-11-20 15:25:10.615703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.643 15:25:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.643 15:25:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:25.643 15:25:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:25.643 I/O targets: 00:13:25.643 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:25.643 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:13:25.643 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:13:25.643 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:25.643 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:25.643 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:25.643 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:25.643 00:13:25.643 00:13:25.643 CUnit - A unit testing framework for C - Version 2.1-3 00:13:25.643 http://cunit.sourceforge.net/ 00:13:25.643 00:13:25.643 00:13:25.643 Suite: bdevio tests on: Nvme3n1 00:13:25.643 Test: blockdev write read block ...passed 00:13:25.643 Test: blockdev write zeroes read block ...passed 00:13:25.643 Test: blockdev write zeroes read no split ...passed 00:13:25.643 Test: blockdev write zeroes read split ...passed 00:13:25.643 Test: blockdev write zeroes read split partial ...passed 00:13:25.643 Test: blockdev reset ...[2024-11-20 15:25:11.545262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:13:25.643 [2024-11-20 15:25:11.549611] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
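(Note: the suites now running were kicked off in two steps visible above: bdevio starts as an RPC server over the same bdev.json config, and tests.py triggers the whole CUnit matrix with a perform_tests call. A hedged manual equivalent of what blockdev.sh does, flags copied from the invocation above:)

    # Start the I/O-level test server, then fire all suites via its RPC.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json '' &
    sleep 2  # crude stand-in for the harness's waitforlisten on the RPC socket
    sudo ./test/bdev/bdevio/tests.py perform_tests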
00:13:25.643 passed 00:13:25.643 Test: blockdev write read 8 blocks ...passed 00:13:25.643 Test: blockdev write read size > 128k ...passed 00:13:25.643 Test: blockdev write read invalid size ...passed 00:13:25.643 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:25.643 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:25.643 Test: blockdev write read max offset ...passed 00:13:25.643 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:25.643 Test: blockdev writev readv 8 blocks ...passed 00:13:25.643 Test: blockdev writev readv 30 x 1block ...passed 00:13:25.643 Test: blockdev writev readv block ...passed 00:13:25.643 Test: blockdev writev readv size > 128k ...passed 00:13:25.643 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:25.643 Test: blockdev comparev and writev ...[2024-11-20 15:25:11.558125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba004000 len:0x1000 00:13:25.643 [2024-11-20 15:25:11.558190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:25.643 passed 00:13:25.643 Test: blockdev nvme passthru rw ...passed 00:13:25.643 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:25:11.559017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:25.643 [2024-11-20 15:25:11.559068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:25.644 passed 00:13:25.644 Test: blockdev nvme admin passthru ...passed 00:13:25.644 Test: blockdev copy ...passed 00:13:25.644 Suite: bdevio tests on: Nvme2n3 00:13:25.644 Test: blockdev write read block ...passed 00:13:25.644 Test: blockdev write zeroes read block ...passed 00:13:25.644 Test: blockdev write zeroes read no split ...passed 00:13:25.902 Test: blockdev write zeroes read split ...passed 00:13:25.902 Test: blockdev write zeroes read split partial ...passed 00:13:25.902 Test: blockdev reset ...[2024-11-20 15:25:11.636808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:25.902 [2024-11-20 15:25:11.641535] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
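(Note: each suite's reset test disconnects and reconnects the controller backing the bdev, which is the nvme_ctrlr_disconnect / bdev_nvme_reset_ctrlr_complete pair above. A similar reset can be provoked from outside the test binary; the RPC name and its availability in this build are assumptions:)

    # Hypothetical manual reset of the controller behind the Nvme2n* bdevs;
    # the name 'Nvme2' comes from the attach_controller config earlier in
    # this log.
    ./scripts/rpc.py bdev_nvme_reset_controller Nvme2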
00:13:25.902 passed 00:13:25.902 Test: blockdev write read 8 blocks ...passed 00:13:25.902 Test: blockdev write read size > 128k ...passed 00:13:25.902 Test: blockdev write read invalid size ...passed 00:13:25.902 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:25.902 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:25.903 Test: blockdev write read max offset ...passed 00:13:25.903 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:25.903 Test: blockdev writev readv 8 blocks ...passed 00:13:25.903 Test: blockdev writev readv 30 x 1block ...passed 00:13:25.903 Test: blockdev writev readv block ...passed 00:13:25.903 Test: blockdev writev readv size > 128k ...passed 00:13:25.903 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:25.903 Test: blockdev comparev and writev ...[2024-11-20 15:25:11.650450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba002000 len:0x1000 00:13:25.903 [2024-11-20 15:25:11.650515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:25.903 passed 00:13:25.903 Test: blockdev nvme passthru rw ...passed 00:13:25.903 Test: blockdev nvme passthru vendor specific ...passed 00:13:25.903 Test: blockdev nvme admin passthru ...[2024-11-20 15:25:11.651204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:25.903 [2024-11-20 15:25:11.651254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:25.903 passed 00:13:25.903 Test: blockdev copy ...passed 00:13:25.903 Suite: bdevio tests on: Nvme2n2 00:13:25.903 Test: blockdev write read block ...passed 00:13:25.903 Test: blockdev write zeroes read block ...passed 00:13:25.903 Test: blockdev write zeroes read no split ...passed 00:13:25.903 Test: blockdev write zeroes read split ...passed 00:13:25.903 Test: blockdev write zeroes read split partial ...passed 00:13:25.903 Test: blockdev reset ...[2024-11-20 15:25:11.730254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:25.903 [2024-11-20 15:25:11.734818] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:25.903 passed 00:13:25.903 Test: blockdev write read 8 blocks ...passed 00:13:25.903 Test: blockdev write read size > 128k ...passed 00:13:25.903 Test: blockdev write read invalid size ...passed 00:13:25.903 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:25.903 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:25.903 Test: blockdev write read max offset ...passed 00:13:25.903 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:25.903 Test: blockdev writev readv 8 blocks ...passed 00:13:25.903 Test: blockdev writev readv 30 x 1block ...passed 00:13:25.903 Test: blockdev writev readv block ...passed 00:13:25.903 Test: blockdev writev readv size > 128k ...passed 00:13:25.903 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:25.903 Test: blockdev comparev and writev ...[2024-11-20 15:25:11.743258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce638000 len:0x1000 00:13:25.903 [2024-11-20 15:25:11.743323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:25.903 passed 00:13:25.903 Test: blockdev nvme passthru rw ...passed 00:13:25.903 Test: blockdev nvme passthru vendor specific ...passed 00:13:25.903 Test: blockdev nvme admin passthru ...[2024-11-20 15:25:11.744166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:25.903 [2024-11-20 15:25:11.744210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:25.903 passed 00:13:25.903 Test: blockdev copy ...passed 00:13:25.903 Suite: bdevio tests on: Nvme2n1 00:13:25.903 Test: blockdev write read block ...passed 00:13:25.903 Test: blockdev write zeroes read block ...passed 00:13:25.903 Test: blockdev write zeroes read no split ...passed 00:13:25.903 Test: blockdev write zeroes read split ...passed 00:13:25.903 Test: blockdev write zeroes read split partial ...passed 00:13:25.903 Test: blockdev reset ...[2024-11-20 15:25:11.823753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:25.903 [2024-11-20 15:25:11.828254] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
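(Note: the Nvme2n1, Nvme2n2 and Nvme2n3 resets all target the same PCI function, 0000:00:12.0: per the bdev dump earlier they are three namespaces, ns_data ids 1 through 3, of one controller with serial 12342. One way to confirm, assuming the usual RPC and flag are available in this build:)

    # Expect a single controller entry whose trid is 0000:00:12.0,
    # serving all three Nvme2nX bdevs.
    ./scripts/rpc.py bdev_nvme_get_controllers -n Nvme2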
00:13:25.903 passed 00:13:25.903 Test: blockdev write read 8 blocks ...passed 00:13:25.903 Test: blockdev write read size > 128k ...passed 00:13:25.903 Test: blockdev write read invalid size ...passed 00:13:25.903 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:25.903 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:25.903 Test: blockdev write read max offset ...passed 00:13:25.903 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:25.903 Test: blockdev writev readv 8 blocks ...passed 00:13:25.903 Test: blockdev writev readv 30 x 1block ...passed 00:13:25.903 Test: blockdev writev readv block ...passed 00:13:25.903 Test: blockdev writev readv size > 128k ...passed 00:13:25.903 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:25.903 Test: blockdev comparev and writev ...[2024-11-20 15:25:11.837022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce634000 len:0x1000 00:13:25.903 [2024-11-20 15:25:11.837091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:25.903 passed 00:13:25.903 Test: blockdev nvme passthru rw ...passed 00:13:25.903 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:25:11.838003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:25.903 [2024-11-20 15:25:11.838056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:25.903 passed 00:13:25.903 Test: blockdev nvme admin passthru ...passed 00:13:25.903 Test: blockdev copy ...passed 00:13:25.903 Suite: bdevio tests on: Nvme1n1p2 00:13:25.903 Test: blockdev write read block ...passed 00:13:25.903 Test: blockdev write zeroes read block ...passed 00:13:25.903 Test: blockdev write zeroes read no split ...passed 00:13:26.162 Test: blockdev write zeroes read split ...passed 00:13:26.162 Test: blockdev write zeroes read split partial ...passed 00:13:26.162 Test: blockdev reset ...[2024-11-20 15:25:11.918548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:26.162 [2024-11-20 15:25:11.922866] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:13:26.162 passed 00:13:26.162 Test: blockdev write read 8 blocks ...passed 00:13:26.162 Test: blockdev write read size > 128k ...passed 00:13:26.162 Test: blockdev write read invalid size ...passed 00:13:26.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:26.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:26.162 Test: blockdev write read max offset ...passed 00:13:26.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:26.162 Test: blockdev writev readv 8 blocks ...passed 00:13:26.162 Test: blockdev writev readv 30 x 1block ...passed 00:13:26.162 Test: blockdev writev readv block ...passed 00:13:26.162 Test: blockdev writev readv size > 128k ...passed 00:13:26.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:26.162 Test: blockdev comparev and writev ...[2024-11-20 15:25:11.931960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ce630000 len:0x1000 00:13:26.162 [2024-11-20 15:25:11.932024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:26.162 passed 00:13:26.162 Test: blockdev nvme passthru rw ...passed 00:13:26.162 Test: blockdev nvme passthru vendor specific ...passed 00:13:26.162 Test: blockdev nvme admin passthru ...passed 00:13:26.162 Test: blockdev copy ...passed 00:13:26.162 Suite: bdevio tests on: Nvme1n1p1 00:13:26.162 Test: blockdev write read block ...passed 00:13:26.162 Test: blockdev write zeroes read block ...passed 00:13:26.162 Test: blockdev write zeroes read no split ...passed 00:13:26.162 Test: blockdev write zeroes read split ...passed 00:13:26.162 Test: blockdev write zeroes read split partial ...passed 00:13:26.162 Test: blockdev reset ...[2024-11-20 15:25:12.003123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:26.163 [2024-11-20 15:25:12.007406] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
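(Note: the LBAs in the compare commands mirror the GPT layout built earlier: Nvme1n1p2's compare above ran at lba:655360 and Nvme1n1p1's below runs at lba:256, matching the offset_blocks values in the bdev dump. With 4096-byte sectors those numbers can be read straight off the partition table; the kernel device name here is an assumption, since SPDK's Nvme1n1 need not map to the same /dev node:)

    # Hedged check of the partition start sectors (run while the kernel
    # nvme driver still owns the device, i.e. before setup.sh rebinds it).
    sgdisk -i 1 /dev/nvme0n1 | grep 'First sector'   # SPDK_TEST_first, expect 256
    sgdisk -i 2 /dev/nvme0n1 | grep 'First sector'   # SPDK_TEST_second, expect 655360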
00:13:26.163 passed 00:13:26.163 Test: blockdev write read 8 blocks ...passed 00:13:26.163 Test: blockdev write read size > 128k ...passed 00:13:26.163 Test: blockdev write read invalid size ...passed 00:13:26.163 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:26.163 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:26.163 Test: blockdev write read max offset ...passed 00:13:26.163 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:26.163 Test: blockdev writev readv 8 blocks ...passed 00:13:26.163 Test: blockdev writev readv 30 x 1block ...passed 00:13:26.163 Test: blockdev writev readv block ...passed 00:13:26.163 Test: blockdev writev readv size > 128k ...passed 00:13:26.163 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:26.163 Test: blockdev comparev and writev ...[2024-11-20 15:25:12.016036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2baa0e000 len:0x1000 00:13:26.163 [2024-11-20 15:25:12.016095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:26.163 passed 00:13:26.163 Test: blockdev nvme passthru rw ...passed 00:13:26.163 Test: blockdev nvme passthru vendor specific ...passed 00:13:26.163 Test: blockdev nvme admin passthru ...passed 00:13:26.163 Test: blockdev copy ...passed 00:13:26.163 Suite: bdevio tests on: Nvme0n1 00:13:26.163 Test: blockdev write read block ...passed 00:13:26.163 Test: blockdev write zeroes read block ...passed 00:13:26.163 Test: blockdev write zeroes read no split ...passed 00:13:26.163 Test: blockdev write zeroes read split ...passed 00:13:26.163 Test: blockdev write zeroes read split partial ...passed 00:13:26.163 Test: blockdev reset ...[2024-11-20 15:25:12.090476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:26.163 [2024-11-20 15:25:12.094872] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:26.163 passed 00:13:26.163 Test: blockdev write read 8 blocks ...passed 00:13:26.163 Test: blockdev write read size > 128k ...passed 00:13:26.163 Test: blockdev write read invalid size ...passed 00:13:26.163 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:26.163 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:26.163 Test: blockdev write read max offset ...passed 00:13:26.163 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:26.163 Test: blockdev writev readv 8 blocks ...passed 00:13:26.163 Test: blockdev writev readv 30 x 1block ...passed 00:13:26.163 Test: blockdev writev readv block ...passed 00:13:26.163 Test: blockdev writev readv size > 128k ...passed 00:13:26.163 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:26.163 Test: blockdev comparev and writev ...passed 00:13:26.163 Test: blockdev nvme passthru rw ...[2024-11-20 15:25:12.102528] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:13:26.163 separate metadata which is not supported yet. 
00:13:26.163 passed 00:13:26.163 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:25:12.103263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:13:26.163 [2024-11-20 15:25:12.103315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:13:26.163 passed 00:13:26.163 Test: blockdev nvme admin passthru ...passed 00:13:26.163 Test: blockdev copy ...passed 00:13:26.163 00:13:26.163 Run Summary: Type Total Ran Passed Failed Inactive 00:13:26.163 suites 7 7 n/a 0 0 00:13:26.163 tests 161 161 161 0 0 00:13:26.163 asserts 1025 1025 1025 0 n/a 00:13:26.163 00:13:26.163 Elapsed time = 1.749 seconds 00:13:26.163 0 00:13:26.422 15:25:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62812 00:13:26.422 15:25:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62812 ']' 00:13:26.422 15:25:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62812 00:13:26.422 15:25:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:26.422 15:25:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.422 15:25:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62812 00:13:26.422 15:25:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.422 15:25:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.422 killing process with pid 62812 00:13:26.422 15:25:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62812' 00:13:26.422 15:25:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62812 00:13:26.422 15:25:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62812 00:13:27.359 15:25:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:27.359 00:13:27.359 real 0m3.035s 00:13:27.359 user 0m8.002s 00:13:27.359 sys 0m0.431s 00:13:27.359 ************************************ 00:13:27.359 END TEST bdev_bounds 00:13:27.359 ************************************ 00:13:27.359 15:25:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.359 15:25:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:27.359 15:25:13 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:27.359 15:25:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:27.359 15:25:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.359 15:25:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:27.617 ************************************ 00:13:27.617 START TEST bdev_nbd 00:13:27.617 ************************************ 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62877 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62877 /var/tmp/spdk-nbd.sock 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62877 ']' 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:27.617 15:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:27.618 15:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.618 15:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:27.618 [2024-11-20 15:25:13.422004] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:13:27.618 [2024-11-20 15:25:13.422150] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.875 [2024-11-20 15:25:13.596906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.875 [2024-11-20 15:25:13.720914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.808 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.808 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:28.808 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:28.808 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:28.808 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:28.808 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:28.808 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:28.808 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.809 1+0 records in 00:13:28.809 1+0 records out 00:13:28.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443968 s, 9.2 MB/s 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:28.809 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.067 1+0 records in 00:13:29.067 1+0 records out 00:13:29.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424694 s, 9.6 MB/s 00:13:29.067 15:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.067 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:29.067 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.067 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.067 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:29.067 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:29.067 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:29.067 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.633 1+0 records in 00:13:29.633 1+0 records out 00:13:29.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576359 s, 7.1 MB/s 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:29.633 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.892 1+0 records in 00:13:29.892 1+0 records out 00:13:29.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528372 s, 7.8 MB/s 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:29.892 15:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.151 1+0 records in 00:13:30.151 1+0 records out 00:13:30.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533121 s, 7.7 MB/s 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:30.151 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.718 1+0 records in 00:13:30.718 1+0 records out 00:13:30.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723222 s, 5.7 MB/s 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:30.718 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.977 1+0 records in 00:13:30.977 1+0 records out 00:13:30.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000800754 s, 5.1 MB/s 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:30.977 15:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:31.236 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd0", 00:13:31.236 "bdev_name": "Nvme0n1" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd1", 00:13:31.236 "bdev_name": "Nvme1n1p1" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd2", 00:13:31.236 "bdev_name": "Nvme1n1p2" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd3", 00:13:31.236 "bdev_name": "Nvme2n1" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd4", 00:13:31.236 "bdev_name": "Nvme2n2" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd5", 00:13:31.236 "bdev_name": "Nvme2n3" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd6", 00:13:31.236 "bdev_name": "Nvme3n1" 00:13:31.236 } 00:13:31.236 ]' 00:13:31.236 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:31.236 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd0", 00:13:31.236 "bdev_name": "Nvme0n1" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd1", 00:13:31.236 "bdev_name": "Nvme1n1p1" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd2", 00:13:31.236 "bdev_name": "Nvme1n1p2" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd3", 00:13:31.236 "bdev_name": "Nvme2n1" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd4", 00:13:31.236 "bdev_name": "Nvme2n2" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd5", 00:13:31.236 "bdev_name": "Nvme2n3" 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "nbd_device": "/dev/nbd6", 00:13:31.236 "bdev_name": "Nvme3n1" 00:13:31.236 } 00:13:31.236 ]' 00:13:31.236 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:31.236 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:13:31.236 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.236 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:13:31.236 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:31.236 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:31.236 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.236 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:31.494 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:31.494 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:31.494 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:31.494 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.494 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.494 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:31.494 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.494 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.494 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.495 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:32.063 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:32.063 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:32.063 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:32.063 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.063 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.063 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:32.063 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:32.063 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.063 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.063 15:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:32.323 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:32.323 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:32.323 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:32.323 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.323 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.323 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:32.323 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:32.323 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.323 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.323 15:25:18 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:32.582 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:32.582 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:32.582 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:32.582 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.582 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.582 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:32.582 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:32.582 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.582 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.582 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:32.841 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:32.842 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:32.842 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:32.842 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.842 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.842 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:32.842 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:32.842 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.842 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.842 15:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.409 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:33.668 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:33.668 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:33.668 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:33.927 
15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:33.927 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:13:34.185 /dev/nbd0 00:13:34.185 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:34.185 15:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:34.185 15:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:34.186 15:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:34.186 15:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.186 15:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.186 15:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.186 1+0 records in 00:13:34.186 1+0 records out 00:13:34.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123182 s, 3.3 MB/s 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:34.186 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:13:34.444 /dev/nbd1 00:13:34.444 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:34.444 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:34.444 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:34.444 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:34.444 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.444 15:25:20 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.444 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:34.444 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:34.444 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.444 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.445 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.445 1+0 records in 00:13:34.445 1+0 records out 00:13:34.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536547 s, 7.6 MB/s 00:13:34.445 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.445 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:34.445 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.445 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.445 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:34.445 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.445 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:34.445 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:13:34.703 /dev/nbd10 00:13:34.962 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:34.962 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:34.962 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:34.962 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:34.962 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.962 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.962 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:34.962 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:34.962 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.962 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.962 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.962 1+0 records in 00:13:34.962 1+0 records out 00:13:34.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628581 s, 6.5 MB/s 00:13:34.963 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.963 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:34.963 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.963 15:25:20 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.963 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:34.963 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.963 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:34.963 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:13:35.221 /dev/nbd11 00:13:35.221 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:35.221 15:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:35.221 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:35.221 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.221 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.222 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.222 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:35.222 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.222 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.222 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.222 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.222 1+0 records in 00:13:35.222 1+0 records out 00:13:35.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000838575 s, 4.9 MB/s 00:13:35.222 15:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.222 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.222 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.222 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.222 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.222 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.222 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:35.222 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:13:35.481 /dev/nbd12 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.481 1+0 records in 00:13:35.481 1+0 records out 00:13:35.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526719 s, 7.8 MB/s 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:35.481 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:13:35.741 /dev/nbd13 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.741 1+0 records in 00:13:35.741 1+0 records out 00:13:35.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726323 s, 5.6 MB/s 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:35.741 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:13:36.000 /dev/nbd14 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.000 1+0 records in 00:13:36.000 1+0 records out 00:13:36.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000828191 s, 4.9 MB/s 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:36.000 15:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd0", 00:13:36.259 "bdev_name": "Nvme0n1" 00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd1", 00:13:36.259 "bdev_name": "Nvme1n1p1" 00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd10", 00:13:36.259 "bdev_name": "Nvme1n1p2" 00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd11", 00:13:36.259 "bdev_name": "Nvme2n1" 00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd12", 00:13:36.259 "bdev_name": "Nvme2n2" 00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd13", 00:13:36.259 "bdev_name": "Nvme2n3" 
00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd14", 00:13:36.259 "bdev_name": "Nvme3n1" 00:13:36.259 } 00:13:36.259 ]' 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd0", 00:13:36.259 "bdev_name": "Nvme0n1" 00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd1", 00:13:36.259 "bdev_name": "Nvme1n1p1" 00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd10", 00:13:36.259 "bdev_name": "Nvme1n1p2" 00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd11", 00:13:36.259 "bdev_name": "Nvme2n1" 00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd12", 00:13:36.259 "bdev_name": "Nvme2n2" 00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd13", 00:13:36.259 "bdev_name": "Nvme2n3" 00:13:36.259 }, 00:13:36.259 { 00:13:36.259 "nbd_device": "/dev/nbd14", 00:13:36.259 "bdev_name": "Nvme3n1" 00:13:36.259 } 00:13:36.259 ]' 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:36.259 /dev/nbd1 00:13:36.259 /dev/nbd10 00:13:36.259 /dev/nbd11 00:13:36.259 /dev/nbd12 00:13:36.259 /dev/nbd13 00:13:36.259 /dev/nbd14' 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:36.259 /dev/nbd1 00:13:36.259 /dev/nbd10 00:13:36.259 /dev/nbd11 00:13:36.259 /dev/nbd12 00:13:36.259 /dev/nbd13 00:13:36.259 /dev/nbd14' 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:36.259 256+0 records in 00:13:36.259 256+0 records out 00:13:36.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00776869 s, 135 MB/s 00:13:36.259 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.260 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:36.519 256+0 records in 00:13:36.519 256+0 records out 00:13:36.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.132982 s, 7.9 MB/s 00:13:36.519 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.519 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:36.519 256+0 records in 00:13:36.519 256+0 records out 00:13:36.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14559 s, 7.2 MB/s 00:13:36.519 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.519 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:36.779 256+0 records in 00:13:36.779 256+0 records out 00:13:36.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147215 s, 7.1 MB/s 00:13:36.779 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.779 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:37.038 256+0 records in 00:13:37.038 256+0 records out 00:13:37.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150401 s, 7.0 MB/s 00:13:37.038 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.038 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:37.038 256+0 records in 00:13:37.038 256+0 records out 00:13:37.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14605 s, 7.2 MB/s 00:13:37.038 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.038 15:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:37.298 256+0 records in 00:13:37.298 256+0 records out 00:13:37.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15033 s, 7.0 MB/s 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:13:37.298 256+0 records in 00:13:37.298 256+0 records out 00:13:37.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145656 s, 7.2 MB/s 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.298 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.558 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:37.816 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:37.816 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:37.816 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:37.816 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.816 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.816 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:37.816 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.816 15:25:23 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:37.816 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.816 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:38.075 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:38.075 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:38.075 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:38.075 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.075 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.075 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:38.075 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.075 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.075 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.075 15:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:38.334 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:38.334 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:38.334 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:38.334 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.334 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.334 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:38.334 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.334 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.334 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.334 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:38.593 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:38.593 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:38.593 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:38.593 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.593 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.593 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:38.593 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.593 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.593 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.593 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:38.852 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:13:38.852 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:38.852 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:38.852 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.852 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.852 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:38.852 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.852 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.852 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.852 15:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:39.418 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:39.418 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:39.418 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:39.418 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.418 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.418 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:39.418 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.418 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.418 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.418 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:39.677 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:39.677 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:39.677 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:39.677 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.677 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.677 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:39.677 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.677 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.677 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:39.677 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.677 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:39.936 15:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:40.195 malloc_lvol_verify 00:13:40.195 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:40.465 ed0540a6-2236-4d6c-a31d-c0b71745c053 00:13:40.465 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:40.725 5cb205a4-b577-4435-95d4-5c3cffeaadd3 00:13:40.725 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:40.984 /dev/nbd0 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:40.984 mke2fs 1.47.0 (5-Feb-2023) 00:13:40.984 Discarding device blocks: 0/4096 done 00:13:40.984 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:40.984 00:13:40.984 Allocating group tables: 0/1 done 00:13:40.984 Writing inode tables: 0/1 done 00:13:40.984 Creating journal (1024 blocks): done 00:13:40.984 Writing superblocks and filesystem accounting information: 0/1 done 00:13:40.984 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:40.984 15:25:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:41.242 15:25:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:41.242 15:25:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:41.242 15:25:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:41.242 15:25:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.242 15:25:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.242 15:25:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62877 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62877 ']' 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62877 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62877 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.501 killing process with pid 62877 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62877' 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62877 00:13:41.501 15:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62877 00:13:42.878 15:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:42.878 00:13:42.878 real 0m15.326s 00:13:42.878 user 0m20.366s 00:13:42.878 sys 0m6.422s 00:13:42.878 15:25:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.878 15:25:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:42.878 ************************************ 00:13:42.878 END TEST bdev_nbd 00:13:42.878 ************************************ 00:13:42.878 15:25:28 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:42.878 15:25:28 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:13:42.878 15:25:28 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:13:42.878 skipping fio tests on NVMe due to multi-ns failures. 00:13:42.878 15:25:28 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:13:42.878 15:25:28 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:42.878 15:25:28 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:42.878 15:25:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:42.878 15:25:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.878 15:25:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:42.878 ************************************ 00:13:42.878 START TEST bdev_verify 00:13:42.878 ************************************ 00:13:42.878 15:25:28 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:42.878 [2024-11-20 15:25:28.800468] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:13:42.878 [2024-11-20 15:25:28.800603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63332 ] 00:13:43.136 [2024-11-20 15:25:28.969859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:43.136 [2024-11-20 15:25:29.085790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.136 [2024-11-20 15:25:29.085821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.078 Running I/O for 5 seconds... 
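The bdevperf invocation above runs with -q 128 (queue depth per job), -o 4096 (I/O size in bytes), -w verify (a read-back verification workload) and -t 5 (seconds), with -m 0x3 spreading the jobs over two reactors, which is why every bdev appears twice in the results, once per core mask. At a 4 KiB I/O size the MiB/s column is simply IOPS/256; a quick sanity check of the first interim readout below:

  echo "scale=2; 18112 * 4096 / 1048576" | bc   # 70.75, matching "18112.00 IOPS, 70.75 MiB/s"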
00:13:46.391 18112.00 IOPS, 70.75 MiB/s [2024-11-20T15:25:33.287Z] 18464.00 IOPS, 72.12 MiB/s [2024-11-20T15:25:34.294Z] 18773.33 IOPS, 73.33 MiB/s [2024-11-20T15:25:35.231Z] 18640.00 IOPS, 72.81 MiB/s [2024-11-20T15:25:35.231Z] 18252.80 IOPS, 71.30 MiB/s 00:13:49.273 Latency(us) 00:13:49.273 [2024-11-20T15:25:35.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.273 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x0 length 0xbd0bd 00:13:49.273 Nvme0n1 : 5.09 1359.20 5.31 0.00 0.00 93984.81 20472.20 95370.48 00:13:49.273 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:49.273 Nvme0n1 : 5.07 1211.71 4.73 0.00 0.00 105365.83 22219.82 91875.23 00:13:49.273 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x0 length 0x4ff80 00:13:49.273 Nvme1n1p1 : 5.09 1357.94 5.30 0.00 0.00 93931.45 22719.15 91875.23 00:13:49.273 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x4ff80 length 0x4ff80 00:13:49.273 Nvme1n1p1 : 5.07 1211.15 4.73 0.00 0.00 105194.72 25340.59 89378.62 00:13:49.273 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x0 length 0x4ff7f 00:13:49.273 Nvme1n1p2 : 5.09 1357.32 5.30 0.00 0.00 93642.86 23218.47 87880.66 00:13:49.273 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:13:49.273 Nvme1n1p2 : 5.08 1210.61 4.73 0.00 0.00 104972.21 25715.08 87880.66 00:13:49.273 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x0 length 0x80000 00:13:49.273 Nvme2n1 : 5.09 1356.81 5.30 0.00 0.00 93455.65 23343.30 85883.37 00:13:49.273 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x80000 length 0x80000 00:13:49.273 Nvme2n1 : 5.08 1210.16 4.73 0.00 0.00 104807.08 25715.08 85384.05 00:13:49.273 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x0 length 0x80000 00:13:49.273 Nvme2n2 : 5.10 1356.33 5.30 0.00 0.00 93265.55 23218.47 88879.30 00:13:49.273 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x80000 length 0x80000 00:13:49.273 Nvme2n2 : 5.08 1209.64 4.73 0.00 0.00 104645.73 25090.93 84385.40 00:13:49.273 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x0 length 0x80000 00:13:49.273 Nvme2n3 : 5.10 1355.85 5.30 0.00 0.00 93064.21 20472.20 91875.23 00:13:49.273 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x80000 length 0x80000 00:13:49.273 Nvme2n3 : 5.08 1209.11 4.72 0.00 0.00 104481.00 21096.35 87880.66 00:13:49.273 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x0 length 0x20000 00:13:49.273 Nvme3n1 : 5.10 1355.38 5.29 0.00 0.00 92911.87 15728.64 93872.52 00:13:49.273 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.273 Verification LBA range: start 0x20000 length 0x20000 00:13:49.273 
Nvme3n1 : 5.09 1219.41 4.76 0.00 0.00 103489.31 2949.12 90876.59 00:13:49.273 [2024-11-20T15:25:35.231Z] =================================================================================================================== 00:13:49.273 [2024-11-20T15:25:35.231Z] Total : 17980.62 70.24 0.00 0.00 98759.23 2949.12 95370.48 00:13:50.654 00:13:50.654 real 0m7.789s 00:13:50.654 user 0m14.400s 00:13:50.654 sys 0m0.320s 00:13:50.654 15:25:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.654 15:25:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:50.654 ************************************ 00:13:50.654 END TEST bdev_verify 00:13:50.654 ************************************ 00:13:50.654 15:25:36 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:50.654 15:25:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:50.654 15:25:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.654 15:25:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:50.654 ************************************ 00:13:50.654 START TEST bdev_verify_big_io 00:13:50.654 ************************************ 00:13:50.654 15:25:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:50.913 [2024-11-20 15:25:36.657147] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:13:50.914 [2024-11-20 15:25:36.657286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63435 ] 00:13:50.914 [2024-11-20 15:25:36.827533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.173 [2024-11-20 15:25:36.950151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.173 [2024-11-20 15:25:36.950190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.110 Running I/O for 5 seconds... 
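bdev_verify_big_io repeats the verify workload at a 64 KiB I/O size (-o 65536), so in its table MiB/s is IOPS/16. The per-job average latencies it reports are also consistent with Little's law, concurrency ≈ IOPS × average latency, bounded by the configured queue depth; for the first Nvme0n1 job below (128.14 IOPS at an average of 951585.52 µs ≈ 0.9516 s):

  echo "scale=2; 128.14 * 0.951585" | bc   # 121.93 I/Os in flight on average, close to the -q 128 limit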
00:13:57.231 2300.00 IOPS, 143.75 MiB/s [2024-11-20T15:25:43.756Z] 3149.00 IOPS, 196.81 MiB/s [2024-11-20T15:25:43.756Z] 3602.00 IOPS, 225.12 MiB/s 00:13:57.798 Latency(us) 00:13:57.798 [2024-11-20T15:25:43.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.798 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.798 Verification LBA range: start 0x0 length 0xbd0b 00:13:57.798 Nvme0n1 : 5.74 128.14 8.01 0.00 0.00 951585.52 19723.22 954703.48 00:13:57.798 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.798 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:57.798 Nvme0n1 : 5.72 119.49 7.47 0.00 0.00 1017803.10 34952.53 1509949.44 00:13:57.798 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.798 Verification LBA range: start 0x0 length 0x4ff8 00:13:57.798 Nvme1n1p1 : 5.80 133.15 8.32 0.00 0.00 905637.02 84884.72 814893.35 00:13:57.798 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.798 Verification LBA range: start 0x4ff8 length 0x4ff8 00:13:57.798 Nvme1n1p1 : 5.78 125.15 7.82 0.00 0.00 966477.26 97367.77 1549895.19 00:13:57.798 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.798 Verification LBA range: start 0x0 length 0x4ff7 00:13:57.799 Nvme1n1p2 : 5.74 130.97 8.19 0.00 0.00 901684.06 92374.55 1150437.67 00:13:57.799 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.799 Verification LBA range: start 0x4ff7 length 0x4ff7 00:13:57.799 Nvme1n1p2 : 5.78 132.89 8.31 0.00 0.00 887324.93 97867.09 934730.61 00:13:57.799 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.799 Verification LBA range: start 0x0 length 0x8000 00:13:57.799 Nvme2n1 : 5.80 128.76 8.05 0.00 0.00 898039.14 57422.02 1693699.90 00:13:57.799 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.799 Verification LBA range: start 0x8000 length 0x8000 00:13:57.799 Nvme2n1 : 5.73 134.03 8.38 0.00 0.00 866052.39 96868.45 958698.06 00:13:57.799 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.799 Verification LBA range: start 0x0 length 0x8000 00:13:57.799 Nvme2n2 : 5.84 134.67 8.42 0.00 0.00 840691.17 14480.34 1709678.20 00:13:57.799 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.799 Verification LBA range: start 0x8000 length 0x8000 00:13:57.799 Nvme2n2 : 5.81 143.29 8.96 0.00 0.00 797369.95 25215.76 970681.78 00:13:57.799 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.799 Verification LBA range: start 0x0 length 0x8000 00:13:57.799 Nvme2n3 : 5.85 139.57 8.72 0.00 0.00 791178.92 17476.27 1741634.80 00:13:57.799 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.799 Verification LBA range: start 0x8000 length 0x8000 00:13:57.799 Nvme2n3 : 5.82 148.63 9.29 0.00 0.00 752090.41 6834.47 990654.66 00:13:57.799 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.799 Verification LBA range: start 0x0 length 0x2000 00:13:57.799 Nvme3n1 : 5.89 160.72 10.05 0.00 0.00 673063.05 7052.92 1765602.26 00:13:57.799 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.799 Verification LBA range: start 0x2000 length 0x2000 00:13:57.799 Nvme3n1 : 5.82 153.56 9.60 0.00 0.00 710242.55 3386.03 1010627.54 00:13:57.799 
[2024-11-20T15:25:43.757Z] =================================================================================================================== 00:13:57.799 [2024-11-20T15:25:43.757Z] Total : 1913.03 119.56 0.00 0.00 846043.39 3386.03 1765602.26 00:14:00.335 00:14:00.335 real 0m9.166s 00:14:00.335 user 0m17.119s 00:14:00.335 sys 0m0.353s 00:14:00.335 15:25:45 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.335 15:25:45 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.335 ************************************ 00:14:00.335 END TEST bdev_verify_big_io 00:14:00.335 ************************************ 00:14:00.335 15:25:45 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:00.335 15:25:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:00.335 15:25:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.335 15:25:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:00.335 ************************************ 00:14:00.335 START TEST bdev_write_zeroes 00:14:00.335 ************************************ 00:14:00.335 15:25:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:00.335 [2024-11-20 15:25:45.884098] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:14:00.335 [2024-11-20 15:25:45.884284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63551 ] 00:14:00.335 [2024-11-20 15:25:46.053583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.335 [2024-11-20 15:25:46.174320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.271 Running I/O for 1 seconds... 
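bdev_write_zeroes issues zero-fill requests, which on NVMe bdevs typically map to Write Zeroes commands carrying no data payload; that helps explain why the single-core rate below (≈50k IOPS) comfortably beats the two-core 4 KiB verify rate earlier (≈18k IOPS). The MiB/s column uses the same 4 KiB conversion:

  echo "scale=4; 49994 / 256" | bc   # 195.2890, reported as 195.29 MiB/s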
00:14:02.206 49994.00 IOPS, 195.29 MiB/s 00:14:02.206 Latency(us) 00:14:02.206 [2024-11-20T15:25:48.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.206 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:02.206 Nvme0n1 : 1.04 6982.71 27.28 0.00 0.00 18279.15 12170.97 59419.31 00:14:02.206 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:02.206 Nvme1n1p1 : 1.04 7084.76 27.67 0.00 0.00 17987.02 12670.29 41943.04 00:14:02.206 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:02.206 Nvme1n1p2 : 1.04 7073.43 27.63 0.00 0.00 17950.16 12420.63 42941.68 00:14:02.206 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:02.206 Nvme2n1 : 1.04 7062.73 27.59 0.00 0.00 17818.50 12420.63 42692.02 00:14:02.206 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:02.206 Nvme2n2 : 1.04 7052.21 27.55 0.00 0.00 17771.70 10673.01 42941.68 00:14:02.206 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:02.206 Nvme2n3 : 1.05 7041.74 27.51 0.00 0.00 17736.95 8675.72 34453.21 00:14:02.206 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:02.206 Nvme3n1 : 1.05 6970.14 27.23 0.00 0.00 17872.90 12170.97 34453.21 00:14:02.206 [2024-11-20T15:25:48.164Z] =================================================================================================================== 00:14:02.206 [2024-11-20T15:25:48.164Z] Total : 49267.72 192.45 0.00 0.00 17915.85 8675.72 59419.31 00:14:03.583 00:14:03.583 real 0m3.395s 00:14:03.583 user 0m3.013s 00:14:03.583 sys 0m0.264s 00:14:03.583 15:25:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.583 15:25:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:03.583 ************************************ 00:14:03.583 END TEST bdev_write_zeroes 00:14:03.583 ************************************ 00:14:03.583 15:25:49 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:03.583 15:25:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:03.583 15:25:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.583 15:25:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:03.583 ************************************ 00:14:03.583 START TEST bdev_json_nonenclosed 00:14:03.583 ************************************ 00:14:03.583 15:25:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:03.583 [2024-11-20 15:25:49.373228] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:14:03.583 [2024-11-20 15:25:49.373418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63610 ] 00:14:03.842 [2024-11-20 15:25:49.566367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.842 [2024-11-20 15:25:49.685914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.842 [2024-11-20 15:25:49.686021] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:03.842 [2024-11-20 15:25:49.686044] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:03.842 [2024-11-20 15:25:49.686057] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:04.100 00:14:04.100 real 0m0.696s 00:14:04.100 user 0m0.420s 00:14:04.100 sys 0m0.170s 00:14:04.100 15:25:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.100 ************************************ 00:14:04.100 END TEST bdev_json_nonenclosed 00:14:04.100 ************************************ 00:14:04.100 15:25:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:04.100 15:25:49 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:04.100 15:25:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:04.100 15:25:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.100 15:25:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:04.100 ************************************ 00:14:04.100 START TEST bdev_json_nonarray 00:14:04.100 ************************************ 00:14:04.100 15:25:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:04.358 [2024-11-20 15:25:50.109463] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:14:04.358 [2024-11-20 15:25:50.109617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63635 ] 00:14:04.358 [2024-11-20 15:25:50.282296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.616 [2024-11-20 15:25:50.400442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.616 [2024-11-20 15:25:50.400551] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
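Neither fixture's contents appear in the log, but the two json_config errors above pin down their shape; illustrative equivalents (assumptions, not the shipped nonenclosed.json and nonarray.json) would be:

  printf '%s' '"subsystems": []' > bad1.json      # rejected: not enclosed in {}
  printf '%s' '{"subsystems": {}}' > bad2.json    # rejected: 'subsystems' should be an array
  # a well-formed skeleton, by contrast: {"subsystems": []}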
00:14:04.616 [2024-11-20 15:25:50.400585] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:04.616 [2024-11-20 15:25:50.400598] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:04.875 00:14:04.875 real 0m0.668s 00:14:04.875 user 0m0.410s 00:14:04.875 sys 0m0.151s 00:14:04.875 15:25:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.875 ************************************ 00:14:04.875 15:25:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:04.875 END TEST bdev_json_nonarray 00:14:04.875 ************************************ 00:14:04.875 15:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:14:04.875 15:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:14:04.875 15:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:14:04.875 15:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:04.875 15:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.875 15:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:04.875 ************************************ 00:14:04.875 START TEST bdev_gpt_uuid 00:14:04.875 ************************************ 00:14:04.875 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:14:04.875 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:14:04.875 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:14:04.875 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63666 00:14:04.875 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:04.876 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63666 00:14:04.876 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:04.876 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63666 ']' 00:14:04.876 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.876 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.876 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.876 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.876 15:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:05.135 [2024-11-20 15:25:50.836884] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
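bdev_gpt_uuid, starting up above, asserts GPT partition metadata by filtering bdev_get_bdevs JSON through jq, as the rpc_cmd/jq pairs that follow show. A sketch of the same check against a running spdk_tgt outside the harness (the bdev name and GUID are the values this test expects):

  scripts/rpc.py bdev_get_bdevs -b Nvme1n1p1 \
    | jq -r '.[0].driver_specific.gpt.unique_partition_guid'
  # expected: 6f89f330-603b-4116-ac73-2ca8eae53030 (partition SPDK_TEST_first)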
00:14:05.135 [2024-11-20 15:25:50.837031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63666 ] 00:14:05.135 [2024-11-20 15:25:51.010174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.394 [2024-11-20 15:25:51.132269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.331 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.331 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:14:06.331 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:06.331 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.331 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:06.590 Some configs were skipped because the RPC state that can call them passed over. 00:14:06.590 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.590 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:14:06.590 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.590 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:06.590 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.590 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:14:06.590 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.590 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:06.590 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.590 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:14:06.590 { 00:14:06.590 "name": "Nvme1n1p1", 00:14:06.590 "aliases": [ 00:14:06.590 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:14:06.590 ], 00:14:06.590 "product_name": "GPT Disk", 00:14:06.590 "block_size": 4096, 00:14:06.590 "num_blocks": 655104, 00:14:06.590 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:06.590 "assigned_rate_limits": { 00:14:06.590 "rw_ios_per_sec": 0, 00:14:06.590 "rw_mbytes_per_sec": 0, 00:14:06.590 "r_mbytes_per_sec": 0, 00:14:06.590 "w_mbytes_per_sec": 0 00:14:06.590 }, 00:14:06.590 "claimed": false, 00:14:06.590 "zoned": false, 00:14:06.590 "supported_io_types": { 00:14:06.590 "read": true, 00:14:06.590 "write": true, 00:14:06.590 "unmap": true, 00:14:06.590 "flush": true, 00:14:06.590 "reset": true, 00:14:06.590 "nvme_admin": false, 00:14:06.590 "nvme_io": false, 00:14:06.590 "nvme_io_md": false, 00:14:06.590 "write_zeroes": true, 00:14:06.590 "zcopy": false, 00:14:06.590 "get_zone_info": false, 00:14:06.590 "zone_management": false, 00:14:06.590 "zone_append": false, 00:14:06.590 "compare": true, 00:14:06.590 "compare_and_write": false, 00:14:06.590 "abort": true, 00:14:06.590 "seek_hole": false, 00:14:06.590 "seek_data": false, 00:14:06.590 "copy": true, 00:14:06.590 "nvme_iov_md": false 00:14:06.590 }, 00:14:06.590 "driver_specific": { 
00:14:06.590 "gpt": { 00:14:06.590 "base_bdev": "Nvme1n1", 00:14:06.590 "offset_blocks": 256, 00:14:06.590 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:14:06.590 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:06.590 "partition_name": "SPDK_TEST_first" 00:14:06.590 } 00:14:06.591 } 00:14:06.591 } 00:14:06.591 ]' 00:14:06.591 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:14:06.591 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:14:06.591 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:14:06.591 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:06.591 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:06.591 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:06.591 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:14:06.591 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.591 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:14:06.850 { 00:14:06.850 "name": "Nvme1n1p2", 00:14:06.850 "aliases": [ 00:14:06.850 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:14:06.850 ], 00:14:06.850 "product_name": "GPT Disk", 00:14:06.850 "block_size": 4096, 00:14:06.850 "num_blocks": 655103, 00:14:06.850 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:06.850 "assigned_rate_limits": { 00:14:06.850 "rw_ios_per_sec": 0, 00:14:06.850 "rw_mbytes_per_sec": 0, 00:14:06.850 "r_mbytes_per_sec": 0, 00:14:06.850 "w_mbytes_per_sec": 0 00:14:06.850 }, 00:14:06.850 "claimed": false, 00:14:06.850 "zoned": false, 00:14:06.850 "supported_io_types": { 00:14:06.850 "read": true, 00:14:06.850 "write": true, 00:14:06.850 "unmap": true, 00:14:06.850 "flush": true, 00:14:06.850 "reset": true, 00:14:06.850 "nvme_admin": false, 00:14:06.850 "nvme_io": false, 00:14:06.850 "nvme_io_md": false, 00:14:06.850 "write_zeroes": true, 00:14:06.850 "zcopy": false, 00:14:06.850 "get_zone_info": false, 00:14:06.850 "zone_management": false, 00:14:06.850 "zone_append": false, 00:14:06.850 "compare": true, 00:14:06.850 "compare_and_write": false, 00:14:06.850 "abort": true, 00:14:06.850 "seek_hole": false, 00:14:06.850 "seek_data": false, 00:14:06.850 "copy": true, 00:14:06.850 "nvme_iov_md": false 00:14:06.850 }, 00:14:06.850 "driver_specific": { 00:14:06.850 "gpt": { 00:14:06.850 "base_bdev": "Nvme1n1", 00:14:06.850 "offset_blocks": 655360, 00:14:06.850 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:14:06.850 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:06.850 "partition_name": "SPDK_TEST_second" 00:14:06.850 } 00:14:06.850 } 00:14:06.850 } 00:14:06.850 ]' 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63666 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63666 ']' 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63666 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63666 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.850 killing process with pid 63666 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63666' 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63666 00:14:06.850 15:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63666 00:14:09.385 00:14:09.385 real 0m4.470s 00:14:09.385 user 0m4.587s 00:14:09.385 sys 0m0.564s 00:14:09.385 15:25:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.385 15:25:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:09.385 ************************************ 00:14:09.385 END TEST bdev_gpt_uuid 00:14:09.385 ************************************ 00:14:09.385 15:25:55 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:14:09.385 15:25:55 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:09.385 15:25:55 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:14:09.385 15:25:55 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:09.385 15:25:55 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:09.385 15:25:55 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:14:09.385 15:25:55 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:14:09.385 15:25:55 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:14:09.385 15:25:55 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:09.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:09.953 Waiting for block devices as requested 00:14:09.953 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:10.212 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:14:10.213 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:10.471 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:15.739 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:15.739 15:26:01 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:14:15.739 15:26:01 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:14:15.739 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:15.739 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:15.739 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:15.739 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:15.739 15:26:01 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:14:15.740 00:14:15.740 real 1m8.334s 00:14:15.740 user 1m26.168s 00:14:15.740 sys 0m12.943s 00:14:15.740 15:26:01 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.740 15:26:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:15.740 ************************************ 00:14:15.740 END TEST blockdev_nvme_gpt 00:14:15.740 ************************************ 00:14:15.740 15:26:01 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:14:15.740 15:26:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:15.740 15:26:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.740 15:26:01 -- common/autotest_common.sh@10 -- # set +x 00:14:15.740 ************************************ 00:14:15.740 START TEST nvme 00:14:15.740 ************************************ 00:14:15.740 15:26:01 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:14:15.998 * Looking for test storage... 00:14:15.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:15.998 15:26:01 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:15.998 15:26:01 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:15.998 15:26:01 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:15.998 15:26:01 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:15.998 15:26:01 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.998 15:26:01 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.998 15:26:01 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.998 15:26:01 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.998 15:26:01 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.998 15:26:01 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.998 15:26:01 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.998 15:26:01 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.998 15:26:01 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.998 15:26:01 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.998 15:26:01 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.998 15:26:01 nvme -- scripts/common.sh@344 -- # case "$op" in 00:14:15.998 15:26:01 nvme -- scripts/common.sh@345 -- # : 1 00:14:15.998 15:26:01 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.998 15:26:01 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.998 15:26:01 nvme -- scripts/common.sh@365 -- # decimal 1 00:14:15.998 15:26:01 nvme -- scripts/common.sh@353 -- # local d=1 00:14:15.998 15:26:01 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.998 15:26:01 nvme -- scripts/common.sh@355 -- # echo 1 00:14:15.998 15:26:01 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.998 15:26:01 nvme -- scripts/common.sh@366 -- # decimal 2 00:14:15.998 15:26:01 nvme -- scripts/common.sh@353 -- # local d=2 00:14:15.998 15:26:01 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.998 15:26:01 nvme -- scripts/common.sh@355 -- # echo 2 00:14:15.998 15:26:01 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.998 15:26:01 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.998 15:26:01 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.998 15:26:01 nvme -- scripts/common.sh@368 -- # return 0 00:14:15.998 15:26:01 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.998 15:26:01 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.998 --rc genhtml_branch_coverage=1 00:14:15.998 --rc genhtml_function_coverage=1 00:14:15.998 --rc genhtml_legend=1 00:14:15.998 --rc geninfo_all_blocks=1 00:14:15.998 --rc geninfo_unexecuted_blocks=1 00:14:15.998 00:14:15.998 ' 00:14:15.998 15:26:01 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.998 --rc genhtml_branch_coverage=1 00:14:15.998 --rc genhtml_function_coverage=1 00:14:15.998 --rc genhtml_legend=1 00:14:15.998 --rc geninfo_all_blocks=1 00:14:15.998 --rc geninfo_unexecuted_blocks=1 00:14:15.998 00:14:15.998 ' 00:14:15.998 15:26:01 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.998 --rc genhtml_branch_coverage=1 00:14:15.998 --rc genhtml_function_coverage=1 00:14:15.998 --rc genhtml_legend=1 00:14:15.998 --rc geninfo_all_blocks=1 00:14:15.998 --rc geninfo_unexecuted_blocks=1 00:14:15.998 00:14:15.998 ' 00:14:15.998 15:26:01 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.998 --rc genhtml_branch_coverage=1 00:14:15.998 --rc genhtml_function_coverage=1 00:14:15.998 --rc genhtml_legend=1 00:14:15.998 --rc geninfo_all_blocks=1 00:14:15.998 --rc geninfo_unexecuted_blocks=1 00:14:15.998 00:14:15.998 ' 00:14:15.998 15:26:01 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:16.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:17.501 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:17.501 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:17.501 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:17.501 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:17.501 15:26:03 nvme -- nvme/nvme.sh@79 -- # uname 00:14:17.501 15:26:03 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:14:17.502 15:26:03 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:14:17.502 15:26:03 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:14:17.502 15:26:03 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:14:17.502 15:26:03 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:14:17.502 15:26:03 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:14:17.502 15:26:03 nvme -- common/autotest_common.sh@1075 -- # stubpid=64325 00:14:17.502 15:26:03 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:14:17.502 15:26:03 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:14:17.502 Waiting for stub to ready for secondary processes... 00:14:17.502 15:26:03 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:17.502 15:26:03 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64325 ]] 00:14:17.502 15:26:03 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:14:17.502 [2024-11-20 15:26:03.408840] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:14:17.502 [2024-11-20 15:26:03.409212] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:14:18.438 15:26:04 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:18.438 15:26:04 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64325 ]] 00:14:18.438 15:26:04 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:14:18.697 [2024-11-20 15:26:04.497743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:18.956 [2024-11-20 15:26:04.667525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.956 [2024-11-20 15:26:04.667672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.956 [2024-11-20 15:26:04.667700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.956 [2024-11-20 15:26:04.693902] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:14:18.956 [2024-11-20 15:26:04.693950] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:18.956 [2024-11-20 15:26:04.711415] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:14:18.956 [2024-11-20 15:26:04.711659] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:14:18.956 [2024-11-20 15:26:04.716324] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:18.956 [2024-11-20 15:26:04.716660] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:14:18.956 [2024-11-20 15:26:04.716782] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:14:18.956 [2024-11-20 15:26:04.721668] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:18.956 [2024-11-20 15:26:04.721970] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:14:18.956 [2024-11-20 15:26:04.722059] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:14:18.956 [2024-11-20 15:26:04.725628] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:18.956 [2024-11-20 15:26:04.725888] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:14:18.956 [2024-11-20 15:26:04.725985] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:14:18.956 [2024-11-20 15:26:04.726059] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:14:18.956 [2024-11-20 15:26:04.726138] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:14:19.523 done. 00:14:19.523 15:26:05 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:19.523 15:26:05 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:14:19.523 15:26:05 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:14:19.523 15:26:05 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:14:19.523 15:26:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.523 15:26:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.523 ************************************ 00:14:19.523 START TEST nvme_reset 00:14:19.523 ************************************ 00:14:19.523 15:26:05 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:14:20.090 Initializing NVMe Controllers 00:14:20.090 Skipping QEMU NVMe SSD at 0000:00:10.0 00:14:20.091 Skipping QEMU NVMe SSD at 0000:00:11.0 00:14:20.091 Skipping QEMU NVMe SSD at 0000:00:13.0 00:14:20.091 Skipping QEMU NVMe SSD at 0000:00:12.0 00:14:20.091 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:14:20.091 00:14:20.091 real 0m0.424s 00:14:20.091 user 0m0.156s 00:14:20.091 sys 0m0.217s 00:14:20.091 15:26:05 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.091 15:26:05 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:14:20.091 ************************************ 00:14:20.091 END TEST nvme_reset 00:14:20.091 ************************************ 00:14:20.091 15:26:05 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:14:20.091 15:26:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:20.091 15:26:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.091 15:26:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.091 ************************************ 00:14:20.091 START TEST nvme_identify 00:14:20.091 ************************************ 00:14:20.091 15:26:05 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:14:20.091 15:26:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:14:20.091 15:26:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:14:20.091 15:26:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:14:20.091 15:26:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:14:20.091 15:26:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:20.091 15:26:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:14:20.091 15:26:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:20.091 15:26:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:20.091 15:26:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:20.091 15:26:05 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:20.091 15:26:05 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:20.091 15:26:05 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:14:20.353 [2024-11-20 15:26:06.264512] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64355 terminated unexpected 00:14:20.353 ===================================================== 00:14:20.353 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:20.354 ===================================================== 00:14:20.354 Controller Capabilities/Features 00:14:20.354 ================================ 00:14:20.354 Vendor ID: 1b36 00:14:20.354 Subsystem Vendor ID: 1af4 00:14:20.354 Serial Number: 12340 00:14:20.354 Model Number: QEMU NVMe Ctrl 00:14:20.354 Firmware Version: 8.0.0 00:14:20.354 Recommended Arb Burst: 6 00:14:20.354 IEEE OUI Identifier: 00 54 52 00:14:20.354 Multi-path I/O 00:14:20.354 May have multiple subsystem ports: No 00:14:20.354 May have multiple controllers: No 00:14:20.354 Associated with SR-IOV VF: No 00:14:20.354 Max Data Transfer Size: 524288 00:14:20.354 Max Number of Namespaces: 256 00:14:20.354 Max Number of I/O Queues: 64 00:14:20.354 NVMe Specification Version (VS): 1.4 00:14:20.354 NVMe Specification Version (Identify): 1.4 00:14:20.354 Maximum Queue Entries: 2048 00:14:20.354 Contiguous Queues Required: Yes 00:14:20.354 Arbitration Mechanisms Supported 00:14:20.354 Weighted Round Robin: Not Supported 00:14:20.354 Vendor Specific: Not Supported 00:14:20.354 Reset Timeout: 7500 ms 00:14:20.354 Doorbell Stride: 4 bytes 00:14:20.354 NVM Subsystem Reset: Not Supported 00:14:20.354 Command Sets Supported 00:14:20.354 NVM Command Set: Supported 00:14:20.354 Boot Partition: Not Supported 00:14:20.354 Memory Page Size Minimum: 4096 bytes 00:14:20.354 Memory Page Size Maximum: 65536 bytes 00:14:20.354 Persistent Memory Region: Not Supported 00:14:20.354 Optional Asynchronous Events Supported 00:14:20.354 Namespace Attribute Notices: Supported 00:14:20.354 Firmware Activation Notices: Not Supported 00:14:20.354 ANA Change Notices: Not Supported 00:14:20.354 PLE Aggregate Log Change Notices: Not Supported 00:14:20.354 LBA Status Info Alert Notices: Not Supported 00:14:20.354 EGE Aggregate Log Change Notices: Not Supported 00:14:20.354 Normal NVM Subsystem Shutdown event: Not Supported 00:14:20.354 Zone Descriptor Change Notices: Not Supported 00:14:20.354 Discovery Log Change Notices: Not Supported 00:14:20.354 Controller Attributes 00:14:20.354 128-bit Host Identifier: Not Supported 00:14:20.354 Non-Operational Permissive Mode: Not Supported 00:14:20.354 NVM Sets: Not Supported 00:14:20.354 Read Recovery Levels: Not Supported 00:14:20.354 Endurance Groups: Not Supported 00:14:20.354 Predictable Latency Mode: Not Supported 00:14:20.354 Traffic Based Keep ALive: Not Supported 00:14:20.354 Namespace Granularity: Not Supported 00:14:20.354 SQ Associations: Not Supported 00:14:20.354 UUID List: Not Supported 00:14:20.354 Multi-Domain Subsystem: Not Supported 00:14:20.354 Fixed Capacity Management: Not Supported 00:14:20.354 Variable Capacity Management: Not Supported 00:14:20.354 Delete Endurance Group: Not Supported 00:14:20.354 Delete NVM Set: Not Supported 00:14:20.354 Extended LBA Formats Supported: Supported 00:14:20.354 Flexible Data Placement Supported: Not Supported 00:14:20.354 00:14:20.354 Controller Memory Buffer Support 00:14:20.354 ================================ 00:14:20.354 Supported: No 
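
The xtrace above shows how get_nvme_bdfs assembles its device list: scripts/gen_nvme.sh emits a JSON bdev config and jq pulls out each controller's PCI address (traddr). A minimal standalone sketch of the same enumeration, assuming the repo checkout path used by this run:

  # Enumerate local NVMe controllers the way get_nvme_bdfs does above.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  # Bail out if nothing was found, mirroring the (( count == 0 )) guard in the trace.
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
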
00:14:20.354 00:14:20.354 Persistent Memory Region Support 00:14:20.354 ================================ 00:14:20.354 Supported: No 00:14:20.354 00:14:20.354 Admin Command Set Attributes 00:14:20.354 ============================ 00:14:20.354 Security Send/Receive: Not Supported 00:14:20.354 Format NVM: Supported 00:14:20.354 Firmware Activate/Download: Not Supported 00:14:20.354 Namespace Management: Supported 00:14:20.354 Device Self-Test: Not Supported 00:14:20.354 Directives: Supported 00:14:20.354 NVMe-MI: Not Supported 00:14:20.354 Virtualization Management: Not Supported 00:14:20.354 Doorbell Buffer Config: Supported 00:14:20.354 Get LBA Status Capability: Not Supported 00:14:20.354 Command & Feature Lockdown Capability: Not Supported 00:14:20.354 Abort Command Limit: 4 00:14:20.354 Async Event Request Limit: 4 00:14:20.354 Number of Firmware Slots: N/A 00:14:20.354 Firmware Slot 1 Read-Only: N/A 00:14:20.354 Firmware Activation Without Reset: N/A 00:14:20.354 Multiple Update Detection Support: N/A 00:14:20.354 Firmware Update Granularity: No Information Provided 00:14:20.354 Per-Namespace SMART Log: Yes 00:14:20.354 Asymmetric Namespace Access Log Page: Not Supported 00:14:20.354 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:14:20.354 Command Effects Log Page: Supported 00:14:20.354 Get Log Page Extended Data: Supported 00:14:20.354 Telemetry Log Pages: Not Supported 00:14:20.354 Persistent Event Log Pages: Not Supported 00:14:20.354 Supported Log Pages Log Page: May Support 00:14:20.354 Commands Supported & Effects Log Page: Not Supported 00:14:20.354 Feature Identifiers & Effects Log Page:May Support 00:14:20.354 NVMe-MI Commands & Effects Log Page: May Support 00:14:20.354 Data Area 4 for Telemetry Log: Not Supported 00:14:20.354 Error Log Page Entries Supported: 1 00:14:20.354 Keep Alive: Not Supported 00:14:20.354 00:14:20.354 NVM Command Set Attributes 00:14:20.354 ========================== 00:14:20.354 Submission Queue Entry Size 00:14:20.354 Max: 64 00:14:20.354 Min: 64 00:14:20.354 Completion Queue Entry Size 00:14:20.354 Max: 16 00:14:20.354 Min: 16 00:14:20.354 Number of Namespaces: 256 00:14:20.354 Compare Command: Supported 00:14:20.354 Write Uncorrectable Command: Not Supported 00:14:20.354 Dataset Management Command: Supported 00:14:20.354 Write Zeroes Command: Supported 00:14:20.354 Set Features Save Field: Supported 00:14:20.354 Reservations: Not Supported 00:14:20.354 Timestamp: Supported 00:14:20.354 Copy: Supported 00:14:20.354 Volatile Write Cache: Present 00:14:20.354 Atomic Write Unit (Normal): 1 00:14:20.354 Atomic Write Unit (PFail): 1 00:14:20.354 Atomic Compare & Write Unit: 1 00:14:20.354 Fused Compare & Write: Not Supported 00:14:20.354 Scatter-Gather List 00:14:20.354 SGL Command Set: Supported 00:14:20.354 SGL Keyed: Not Supported 00:14:20.354 SGL Bit Bucket Descriptor: Not Supported 00:14:20.354 SGL Metadata Pointer: Not Supported 00:14:20.354 Oversized SGL: Not Supported 00:14:20.354 SGL Metadata Address: Not Supported 00:14:20.354 SGL Offset: Not Supported 00:14:20.354 Transport SGL Data Block: Not Supported 00:14:20.354 Replay Protected Memory Block: Not Supported 00:14:20.354 00:14:20.354 Firmware Slot Information 00:14:20.354 ========================= 00:14:20.354 Active slot: 1 00:14:20.354 Slot 1 Firmware Revision: 1.0 00:14:20.354 00:14:20.354 00:14:20.354 Commands Supported and Effects 00:14:20.354 ============================== 00:14:20.354 Admin Commands 00:14:20.354 -------------- 00:14:20.354 Delete I/O Submission Queue (00h): Supported 
00:14:20.354 Create I/O Submission Queue (01h): Supported 00:14:20.354 Get Log Page (02h): Supported 00:14:20.354 Delete I/O Completion Queue (04h): Supported 00:14:20.354 Create I/O Completion Queue (05h): Supported 00:14:20.354 Identify (06h): Supported 00:14:20.354 Abort (08h): Supported 00:14:20.354 Set Features (09h): Supported 00:14:20.354 Get Features (0Ah): Supported 00:14:20.354 Asynchronous Event Request (0Ch): Supported 00:14:20.354 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:20.354 Directive Send (19h): Supported 00:14:20.354 Directive Receive (1Ah): Supported 00:14:20.354 Virtualization Management (1Ch): Supported 00:14:20.354 Doorbell Buffer Config (7Ch): Supported 00:14:20.354 Format NVM (80h): Supported LBA-Change 00:14:20.354 I/O Commands 00:14:20.354 ------------ 00:14:20.354 Flush (00h): Supported LBA-Change 00:14:20.354 Write (01h): Supported LBA-Change 00:14:20.354 Read (02h): Supported 00:14:20.354 Compare (05h): Supported 00:14:20.354 Write Zeroes (08h): Supported LBA-Change 00:14:20.354 Dataset Management (09h): Supported LBA-Change 00:14:20.354 Unknown (0Ch): Supported 00:14:20.354 Unknown (12h): Supported 00:14:20.354 Copy (19h): Supported LBA-Change 00:14:20.354 Unknown (1Dh): Supported LBA-Change 00:14:20.354 00:14:20.354 Error Log 00:14:20.354 ========= 00:14:20.354 00:14:20.354 Arbitration 00:14:20.354 =========== 00:14:20.354 Arbitration Burst: no limit 00:14:20.354 00:14:20.354 Power Management 00:14:20.354 ================ 00:14:20.354 Number of Power States: 1 00:14:20.354 Current Power State: Power State #0 00:14:20.354 Power State #0: 00:14:20.354 Max Power: 25.00 W 00:14:20.354 Non-Operational State: Operational 00:14:20.355 Entry Latency: 16 microseconds 00:14:20.355 Exit Latency: 4 microseconds 00:14:20.355 Relative Read Throughput: 0 00:14:20.355 Relative Read Latency: 0 00:14:20.355 Relative Write Throughput: 0 00:14:20.355 Relative Write Latency: 0 00:14:20.355 Idle Power: Not Reported 00:14:20.355 Active Power: Not Reported 00:14:20.355 Non-Operational Permissive Mode: Not Supported 00:14:20.355 00:14:20.355 Health Information 00:14:20.355 ================== 00:14:20.355 Critical Warnings: 00:14:20.355 Available Spare Space: OK 00:14:20.355 Temperature: OK 00:14:20.355 Device Reliability: OK 00:14:20.355 Read Only: No 00:14:20.355 Volatile Memory Backup: OK 00:14:20.355 Current Temperature: 323 Kelvin (50 Celsius) 00:14:20.355 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:20.355 Available Spare: 0% 00:14:20.355 Available Spare Threshold: 0% 00:14:20.355 Life Percentage Used: 0% 00:14:20.355 Data Units Read: 709 00:14:20.355 Data Units Written: 637 00:14:20.355 Host Read Commands: 32435 00:14:20.355 Host Write Commands: 32221 00:14:20.355 Controller Busy Time: 0 minutes 00:14:20.355 Power Cycles: 0 00:14:20.355 Power On Hours: 0 hours 00:14:20.355 Unsafe Shutdowns: 0 00:14:20.355 Unrecoverable Media Errors: 0 00:14:20.355 Lifetime Error Log Entries: 0 00:14:20.355 Warning Temperature Time: 0 minutes 00:14:20.355 Critical Temperature Time: 0 minutes 00:14:20.355 00:14:20.355 Number of Queues 00:14:20.355 ================ 00:14:20.355 Number of I/O Submission Queues: 64 00:14:20.355 Number of I/O Completion Queues: 64 00:14:20.355 00:14:20.355 ZNS Specific Controller Data 00:14:20.355 ============================ 00:14:20.355 Zone Append Size Limit: 0 00:14:20.355 00:14:20.355 00:14:20.355 Active Namespaces 00:14:20.355 ================= 00:14:20.355 Namespace ID:1 00:14:20.355 Error Recovery Timeout: Unlimited 00:14:20.355 
Command Set Identifier: NVM (00h) 00:14:20.355 Deallocate: Supported 00:14:20.355 Deallocated/Unwritten Error: Supported 00:14:20.355 Deallocated Read Value: All 0x00 00:14:20.355 Deallocate in Write Zeroes: Not Supported 00:14:20.355 Deallocated Guard Field: 0xFFFF 00:14:20.355 Flush: Supported 00:14:20.355 Reservation: Not Supported 00:14:20.355 Metadata Transferred as: Separate Metadata Buffer 00:14:20.355 Namespace Sharing Capabilities: Private 00:14:20.355 Size (in LBAs): 1548666 (5GiB) 00:14:20.355 Capacity (in LBAs): 1548666 (5GiB) 00:14:20.355 Utilization (in LBAs): 1548666 (5GiB) 00:14:20.355 Thin Provisioning: Not Supported 00:14:20.355 Per-NS Atomic Units: No 00:14:20.355 Maximum Single Source Range Length: 128 00:14:20.355 Maximum Copy Length: 128 00:14:20.355 Maximum Source Range Count: 128 00:14:20.355 NGUID/EUI64 Never Reused: No 00:14:20.355 Namespace Write Protected: No 00:14:20.355 Number of LBA Formats: 8 00:14:20.355 Current LBA Format: LBA Format #07 00:14:20.355 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:20.355 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:20.355 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:20.355 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:20.355 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:20.355 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:20.355 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:20.355 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:20.355 00:14:20.355 NVM Specific Namespace Data 00:14:20.355 =========================== 00:14:20.355 Logical Block Storage Tag Mask: 0 00:14:20.355 Protection Information Capabilities: 00:14:20.355 16b Guard Protection Information Storage Tag Support: No 00:14:20.355 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:20.355 Storage Tag Check Read Support: No 00:14:20.355 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.355 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.355 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.355 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.355 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.355 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.355 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.355 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.355 ===================================================== 00:14:20.355 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:20.355 ===================================================== 00:14:20.355 Controller Capabilities/Features 00:14:20.355 ================================ 00:14:20.355 Vendor ID: 1b36 00:14:20.355 Subsystem Vendor ID: 1af4 00:14:20.355 Serial Number: 12341 00:14:20.355 Model Number: QEMU NVMe Ctrl 00:14:20.355 Firmware Version: 8.0.0 00:14:20.355 Recommended Arb Burst: 6 00:14:20.355 IEEE OUI Identifier: 00 54 52 00:14:20.355 Multi-path I/O 00:14:20.355 May have multiple subsystem ports: No 00:14:20.355 May have multiple controllers: No 00:14:20.355 Associated with SR-IOV VF: No 00:14:20.355 Max Data Transfer Size: 524288 00:14:20.355 Max Number of Namespaces: 256 00:14:20.355 Max Number of I/O Queues: 
64 00:14:20.355 NVMe Specification Version (VS): 1.4 00:14:20.355 NVMe Specification Version (Identify): 1.4 00:14:20.355 Maximum Queue Entries: 2048 00:14:20.355 Contiguous Queues Required: Yes 00:14:20.355 Arbitration Mechanisms Supported 00:14:20.355 Weighted Round Robin: Not Supported 00:14:20.355 Vendor Specific: Not Supported 00:14:20.355 Reset Timeout: 7500 ms 00:14:20.355 Doorbell Stride: 4 bytes 00:14:20.355 NVM Subsystem Reset: Not Supported 00:14:20.355 Command Sets Supported 00:14:20.355 NVM Command Set: Supported 00:14:20.355 Boot Partition: Not Supported 00:14:20.355 Memory Page Size Minimum: 4096 bytes 00:14:20.355 Memory Page Size Maximum: 65536 bytes 00:14:20.355 Persistent Memory Region: Not Supported 00:14:20.355 Optional Asynchronous Events Supported 00:14:20.355 Namespace Attribute Notices: Supported 00:14:20.355 Firmware Activation Notices: Not Supported 00:14:20.355 ANA Change Notices: Not Supported 00:14:20.355 PLE Aggregate Log Change Notices: Not Supported 00:14:20.355 LBA Status Info Alert Notices: Not Supported 00:14:20.355 EGE Aggregate Log Change Notices: Not Supported 00:14:20.355 Normal NVM Subsystem Shutdown event: Not Supported 00:14:20.355 Zone Descriptor Change Notices: Not Supported 00:14:20.355 Discovery Log Change Notices: Not Supported 00:14:20.355 Controller Attributes 00:14:20.355 128-bit Host Identifier: Not Supported 00:14:20.355 Non-Operational Permissive Mode: Not Supported 00:14:20.355 NVM Sets: Not Supported 00:14:20.355 Read Recovery Levels: Not Supported 00:14:20.355 Endurance Groups: Not Supported 00:14:20.355 Predictable Latency Mode: Not Supported 00:14:20.355 Traffic Based Keep ALive: Not Supported 00:14:20.355 Namespace Granularity: Not Supported 00:14:20.355 SQ Associations: Not Supported 00:14:20.355 UUID List: Not Supported 00:14:20.355 Multi-Domain Subsystem: Not Supported 00:14:20.355 Fixed Capacity Management: Not Supported 00:14:20.355 Variable Capacity Management: Not Supported 00:14:20.355 Delete Endurance Group: Not Supported 00:14:20.355 Delete NVM Set: Not Supported 00:14:20.355 Extended LBA Formats Supported: Supported 00:14:20.355 Flexible Data Placement Supported: Not Supported 00:14:20.355 00:14:20.355 Controller Memory Buffer Support 00:14:20.355 ================================ 00:14:20.355 Supported: No 00:14:20.355 00:14:20.355 Persistent Memory Region Support 00:14:20.355 ================================ 00:14:20.355 Supported: No 00:14:20.355 00:14:20.355 Admin Command Set Attributes 00:14:20.355 ============================ 00:14:20.355 Security Send/Receive: Not Supported 00:14:20.355 Format NVM: Supported 00:14:20.355 Firmware Activate/Download: Not Supported 00:14:20.355 Namespace Management: Supported 00:14:20.355 Device Self-Test: Not Supported 00:14:20.355 Directives: Supported 00:14:20.355 NVMe-MI: Not Supported 00:14:20.355 Virtualization Management: Not Supported 00:14:20.355 Doorbell Buffer Config: Supported 00:14:20.355 Get LBA Status Capability: Not Supported 00:14:20.355 Command & Feature Lockdown Capability: Not Supported 00:14:20.355 Abort Command Limit: 4 00:14:20.355 Async Event Request Limit: 4 00:14:20.355 Number of Firmware Slots: N/A 00:14:20.355 Firmware Slot 1 Read-Only: N/A 00:14:20.355 Firmware Activation Without Reset: N/A 00:14:20.355 Multiple Update Detection Support: N/A 00:14:20.355 Firmware Update Granularity: No Information Provided 00:14:20.355 Per-Namespace SMART Log: Yes 00:14:20.356 Asymmetric Namespace Access Log Page: Not Supported 00:14:20.356 Subsystem NQN: 
nqn.2019-08.org.qemu:12341 00:14:20.356 Command Effects Log Page: Supported 00:14:20.356 Get Log Page Extended Data: Supported 00:14:20.356 Telemetry Log Pages: Not Supported 00:14:20.356 Persistent Event Log Pages: Not Supported 00:14:20.356 Supported Log Pages Log Page: May Support 00:14:20.356 Commands Supported & Effects Log Page: Not Supported 00:14:20.356 Feature Identifiers & Effects Log Page:May Support 00:14:20.356 NVMe-MI Commands & Effects Log Page: May Support 00:14:20.356 Data Area 4 for Telemetry Log: Not Supported 00:14:20.356 Error Log Page Entries Supported: 1 00:14:20.356 Keep Alive: Not Supported 00:14:20.356 00:14:20.356 NVM Command Set Attributes 00:14:20.356 ========================== 00:14:20.356 Submission Queue Entry Size 00:14:20.356 Max: 64 00:14:20.356 Min: 64 00:14:20.356 Completion Queue Entry Size 00:14:20.356 Max: 16 00:14:20.356 Min: 16 00:14:20.356 Number of Namespaces: 256 00:14:20.356 Compare Command: Supported 00:14:20.356 Write Uncorrectable Command: Not Supported 00:14:20.356 Dataset Management Command: Supported 00:14:20.356 Write Zeroes Command: Supported 00:14:20.356 Set Features Save Field: Supported 00:14:20.356 Reservations: Not Supported 00:14:20.356 Timestamp: Supported 00:14:20.356 Copy: Supported 00:14:20.356 Volatile Write Cache: Present 00:14:20.356 Atomic Write Unit (Normal): 1 00:14:20.356 Atomic Write Unit (PFail): 1 00:14:20.356 Atomic Compare & Write Unit: 1 00:14:20.356 Fused Compare & Write: Not Supported 00:14:20.356 Scatter-Gather List 00:14:20.356 SGL Command Set: Supported 00:14:20.356 SGL Keyed: Not Supported 00:14:20.356 SGL Bit Bucket Descriptor: Not Supported 00:14:20.356 SGL Metadata Pointer: Not Supported 00:14:20.356 Oversized SGL: Not Supported 00:14:20.356 SGL Metadata Address: Not Supported 00:14:20.356 SGL Offset: Not Supported 00:14:20.356 Transport SGL Data Block: Not Supported 00:14:20.356 Replay Protected Memory Block: Not Supported 00:14:20.356 00:14:20.356 Firmware Slot Information 00:14:20.356 ========================= 00:14:20.356 Active slot: 1 00:14:20.356 Slot 1 Firmware Revision: 1.0 00:14:20.356 00:14:20.356 00:14:20.356 Commands Supported and Effects 00:14:20.356 ============================== 00:14:20.356 Admin Commands 00:14:20.356 -------------- 00:14:20.356 Delete I/O Submission Queue (00h): Supported 00:14:20.356 Create I/O Submission Queue (01h): Supported 00:14:20.356 Get Log Page (02h): Supported 00:14:20.356 Delete I/O Completion Queue (04h): Supported 00:14:20.356 Create I/O Completion Queue (05h): Supported 00:14:20.356 Identify (06h): Supported 00:14:20.356 Abort (08h): Supported 00:14:20.356 Set Features (09h): Supported 00:14:20.356 Get Features (0Ah): Supported 00:14:20.356 Asynchronous Event Request (0Ch): Supported 00:14:20.356 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:20.356 Directive Send (19h): Supported 00:14:20.356 Directive Receive (1Ah): Supported 00:14:20.356 Virtualization Management (1Ch): Supported 00:14:20.356 Doorbell Buffer Config (7Ch): Supported 00:14:20.356 Format NVM (80h): Supported LBA-Change 00:14:20.356 I/O Commands 00:14:20.356 ------------ 00:14:20.356 Flush (00h): Supported LBA-Change 00:14:20.356 Write (01h): Supported LBA-Change 00:14:20.356 Read (02h): Supported 00:14:20.356 Compare (05h): Supported 00:14:20.356 Write Zeroes (08h): Supported LBA-Change 00:14:20.356 Dataset Management (09h): Supported LBA-Change 00:14:20.356 Unknown (0Ch): Supported 00:14:20.356 Unknown (12h): Supported 00:14:20.356 Copy (19h): Supported LBA-Change 
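
Sections like "Commands Supported and Effects" use stable one-line entries, so a script can gate on an optional command with a plain grep rather than parsing the whole dump. A sketch, assuming the same identify binary and shared-memory id (-i 0) invoked earlier in this run:

  # Check whether the controller advertises the optional Copy (19h) command.
  idout=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0)
  if grep -q 'Copy (19h): Supported' <<<"$idout"; then
      echo "Copy command supported"
  fi
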
00:14:20.356 Unknown (1Dh): Supported LBA-Change 00:14:20.356 00:14:20.356 Error Log 00:14:20.356 ========= 00:14:20.356 00:14:20.356 Arbitration 00:14:20.356 =========== 00:14:20.356 Arbitration Burst: no limit 00:14:20.356 00:14:20.356 Power Management 00:14:20.356 ================ 00:14:20.356 Number of Power States: 1 00:14:20.356 Current Power State: Power State #0 00:14:20.356 Power State #0: 00:14:20.356 Max Power: 25.00 W 00:14:20.356 Non-Operational State: Operational 00:14:20.356 Entry Latency: 16 microseconds 00:14:20.356 Exit Latency: 4 microseconds 00:14:20.356 Relative Read Throughput: 0 00:14:20.356 Relative Read Latency: 0 00:14:20.356 Relative Write Throughput: 0 00:14:20.356 Relative Write Latency: 0 00:14:20.356 Idle Power: Not Reported 00:14:20.356 Active Power: Not Reported 00:14:20.356 Non-Operational Permissive Mode: Not Supported 00:14:20.356 00:14:20.356 Health Information 00:14:20.356 ================== 00:14:20.356 Critical Warnings: 00:14:20.356 Available Spare Space: OK 00:14:20.356 Temperature: OK 00:14:20.356 Device Reliability: OK 00:14:20.356 Read Only: No 00:14:20.356 Volatile Memory Backup: OK 00:14:20.356 Current Temperature: 323 Kelvin (50 Celsius) 00:14:20.356 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:20.356 Available Spare: 0% 00:14:20.356 Available Spare Threshold: 0% 00:14:20.356 Life Percentage Used: 0% 00:14:20.356 Data Units Read: 1073 00:14:20.356 Data Units Written: 946 00:14:20.356 Host Read Commands: 47987 00:14:20.356 Host Write Commands: 46879 00:14:20.356 Controller Busy Time: 0 minutes 00:14:20.356 Power Cycles: 0 00:14:20.356 Power On Hours: 0 hours 00:14:20.356 Unsafe Shutdowns: 0 00:14:20.356 Unrecoverable Media Errors: 0 00:14:20.356 Lifetime Error Log Entries: 0 00:14:20.356 Warning Temperature Time: 0 minutes 00:14:20.356 Critical Temperature Time: 0 minutes 00:14:20.356 00:14:20.356 Number of Queues 00:14:20.356 ================ 00:14:20.356 Number of I/O Submission Queues: 64 00:14:20.356 Number of I/O Completion Queues: 64 00:14:20.356 00:14:20.356 ZNS Specific Controller Data 00:14:20.356 ============================ 00:14:20.356 Zone Append Size Limit: 0 00:14:20.356 00:14:20.356 00:14:20.356 Active Namespaces 00:14:20.356 ================= 00:14:20.356 Namespace ID:1 00:14:20.356 Error Recovery Timeout: Unlimited 00:14:20.356 Command Set Identifier: NVM (00h) 00:14:20.356 Deallocate: Supported 00:14:20.356 Deallocated/Unwritten Error: Supported 00:14:20.356 Deallocated Read Value: All 0x00 00:14:20.356 Deallocate in Write Zeroes: Not Supported 00:14:20.356 Deallocated Guard Field: 0xFFFF 00:14:20.356 Flush: Supported 00:14:20.356 Reservation: Not Supported 00:14:20.356 Namespace Sharing Capabilities: Private 00:14:20.356 Size (in LBAs): 1310720 (5GiB) 00:14:20.356 Capacity (in LBAs): 1310720 (5GiB) 00:14:20.356 Utilization (in LBAs): 1310720 (5GiB) 00:14:20.356 Thin Provisioning: Not Supported 00:14:20.356 Per-NS Atomic Units: No 00:14:20.356 Maximum Single Source Range Length: 128 00:14:20.356 Maximum Copy Length: 128 00:14:20.356 Maximum Source Range Count: 128 00:14:20.356 NGUID/EUI64 Never Reused: No 00:14:20.356 Namespace Write Protected: No 00:14:20.356 Number of LBA Formats: 8 00:14:20.356 Current LBA Format: LBA Format #04 00:14:20.356 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:20.356 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:20.356 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:20.356 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:20.356 LBA Format #04: Data Size: 4096 
Metadata Size: 0 00:14:20.356 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:20.356 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:20.356 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:20.356 00:14:20.356 NVM Specific Namespace Data 00:14:20.356 =========================== 00:14:20.356 Logical Block Storage Tag Mask: 0 00:14:20.356 Protection Information Capabilities: 00:14:20.356 16b Guard Protection Information Storage Tag Support: No 00:14:20.356 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:20.356 Storage Tag Check Read Support: No 00:14:20.356 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.356 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.356 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.356 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.356 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.356 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.356 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.356 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.356 ===================================================== 00:14:20.356 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:20.356 ===================================================== 00:14:20.356 Controller Capabilities/Features 00:14:20.356 ================================ 00:14:20.356 Vendor ID: 1b36 00:14:20.356 Subsystem Vendor ID: 1af4 00:14:20.356 Serial Number: 12343 00:14:20.357 Model Number: QEMU NVMe Ctrl 00:14:20.357 Firmware Version: 8.0.0 00:14:20.357 Recommended Arb Burst: 6 00:14:20.357 IEEE OUI Identifier: 00 54 52 00:14:20.357 Multi-path I/O 00:14:20.357 May have multiple subsystem ports: No 00:14:20.357 May have multiple controllers: Yes 00:14:20.357 Associated with SR-IOV VF: No 00:14:20.357 Max Data Transfer Size: 524288 00:14:20.357 Max Number of Namespaces: 256 00:14:20.357 Max Number of I/O Queues: 64 00:14:20.357 NVMe Specification Version (VS): 1.4 00:14:20.357 NVMe Specification Version (Identify): 1.4 00:14:20.357 Maximum Queue Entries: 2048 00:14:20.357 Contiguous Queues Required: Yes 00:14:20.357 Arbitration Mechanisms Supported 00:14:20.357 Weighted Round Robin: Not Supported 00:14:20.357 Vendor Specific: Not Supported 00:14:20.357 Reset Timeout: 7500 ms 00:14:20.357 Doorbell Stride: 4 bytes 00:14:20.357 NVM Subsystem Reset: Not Supported 00:14:20.357 Command Sets Supported 00:14:20.357 NVM Command Set: Supported 00:14:20.357 Boot Partition: Not Supported 00:14:20.357 Memory Page Size Minimum: 4096 bytes 00:14:20.357 Memory Page Size Maximum: 65536 bytes 00:14:20.357 Persistent Memory Region: Not Supported 00:14:20.357 Optional Asynchronous Events Supported 00:14:20.357 Namespace Attribute Notices: Supported 00:14:20.357 Firmware Activation Notices: Not Supported 00:14:20.357 ANA Change Notices: Not Supported 00:14:20.357 PLE Aggregate Log Change Notices: Not Supported 00:14:20.357 LBA Status Info Alert Notices: Not Supported 00:14:20.357 EGE Aggregate Log Change Notices: Not Supported 00:14:20.357 Normal NVM Subsystem Shutdown event: Not Supported 00:14:20.357 Zone Descriptor Change Notices: Not Supported 00:14:20.357 Discovery Log Change 
Notices: Not Supported 00:14:20.357 Controller Attributes 00:14:20.357 128-bit Host Identifier: Not Supported 00:14:20.357 Non-Operational Permissive Mode: Not Supported 00:14:20.357 NVM Sets: Not Supported 00:14:20.357 Read Recovery Levels: Not Supported 00:14:20.357 Endurance Groups: Supported 00:14:20.357 Predictable Latency Mode: Not Supported 00:14:20.357 Traffic Based Keep ALive: Not Supported 00:14:20.357 Namespace Granularity: Not Supported 00:14:20.357 SQ Associations: Not Supported 00:14:20.357 UUID List: Not Supported 00:14:20.357 Multi-Domain Subsystem: Not Supported 00:14:20.357 Fixed Capacity Management: Not Supported 00:14:20.357 Variable Capacity Management: Not Supported 00:14:20.357 Delete Endurance Group: Not Supported 00:14:20.357 Delete NVM Set: Not Supported 00:14:20.357 Extended LBA Formats Supported: Supported 00:14:20.357 Flexible Data Placement Supported: Supported 00:14:20.357 00:14:20.357 Controller Memory Buffer Support 00:14:20.357 ================================ 00:14:20.357 Supported: No 00:14:20.357 00:14:20.357 Persistent Memory Region Support 00:14:20.357 ================================ 00:14:20.357 Supported: No 00:14:20.357 00:14:20.357 Admin Command Set Attributes 00:14:20.357 ============================ 00:14:20.357 Security Send/Receive: Not Supported 00:14:20.357 Format NVM: Supported 00:14:20.357 Firmware Activate/Download: Not Supported 00:14:20.357 Namespace Management: Supported 00:14:20.357 Device Self-Test: Not Supported 00:14:20.357 Directives: Supported 00:14:20.357 NVMe-MI: Not Supported 00:14:20.357 Virtualization Management: Not Supported 00:14:20.357 Doorbell Buffer Config: Supported 00:14:20.357 Get LBA Status Capability: Not Supported 00:14:20.357 Command & Feature Lockdown Capability: Not Supported 00:14:20.357 Abort Command Limit: 4 00:14:20.357 Async Event Request Limit: 4 00:14:20.357 Number of Firmware Slots: N/A 00:14:20.357 Firmware Slot 1 Read-Only: N/A 00:14:20.357 Firmware Activation Without Reset: N/A 00:14:20.357 Multiple Update Detection Support: N/A 00:14:20.357 Firmware Update Granularity: No Information Provided 00:14:20.357 Per-Namespace SMART Log: Yes 00:14:20.357 Asymmetric Namespace Access Log Page: Not Supported 00:14:20.357 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:14:20.357 Command Effects Log Page: Supported 00:14:20.357 Get Log Page Extended Data: Supported 00:14:20.357 Telemetry Log Pages: Not Supported 00:14:20.357 Persistent Event Log Pages: Not Supported 00:14:20.357 Supported Log Pages Log Page: May Support 00:14:20.357 Commands Supported & Effects Log Page: Not Supported 00:14:20.357 Feature Identifiers & Effects Log Page:May Support 00:14:20.357 NVMe-MI Commands & Effects Log Page: May Support 00:14:20.357 Data Area 4 for Telemetry Log: Not Supported 00:14:20.357 Error Log Page Entries Supported: 1 00:14:20.357 Keep Alive: Not Supported 00:14:20.357 00:14:20.357 NVM Command Set Attributes 00:14:20.357 ========================== 00:14:20.357 Submission Queue Entry Size 00:14:20.357 Max: 64 00:14:20.357 Min: 64 00:14:20.357 Completion Queue Entry Size 00:14:20.357 Max: 16 00:14:20.357 Min: 16 00:14:20.357 Number of Namespaces: 256 00:14:20.357 Compare Command: Supported 00:14:20.357 Write Uncorrectable Command: Not Supported 00:14:20.357 Dataset Management Command: Supported 00:14:20.357 Write Zeroes Command: Supported 00:14:20.357 Set Features Save Field: Supported 00:14:20.357 Reservations: Not Supported 00:14:20.357 Timestamp: Supported 00:14:20.357 Copy: Supported 00:14:20.357 Volatile 
Write Cache: Present 00:14:20.357 Atomic Write Unit (Normal): 1 00:14:20.357 Atomic Write Unit (PFail): 1 00:14:20.357 Atomic Compare & Write Unit: 1 00:14:20.357 Fused Compare & Write: Not Supported 00:14:20.357 Scatter-Gather List 00:14:20.357 SGL Command Set: Supported 00:14:20.357 SGL Keyed: Not Supported 00:14:20.357 SGL Bit Bucket Descriptor: Not Supported 00:14:20.357 SGL Metadata Pointer: Not Supported 00:14:20.357 Oversized SGL: Not Supported 00:14:20.357 SGL Metadata Address: Not Supported 00:14:20.357 SGL Offset: Not Supported 00:14:20.357 Transport SGL Data Block: Not Supported 00:14:20.357 Replay Protected Memory Block: Not Supported 00:14:20.357 00:14:20.357 Firmware Slot Information 00:14:20.357 ========================= 00:14:20.357 Active slot: 1 00:14:20.357 Slot 1 Firmware Revision: 1.0 00:14:20.357 00:14:20.357 00:14:20.357 Commands Supported and Effects 00:14:20.357 ============================== 00:14:20.357 Admin Commands 00:14:20.357 -------------- 00:14:20.357 Delete I/O Submission Queue (00h): Supported 00:14:20.357 Create I/O Submission Queue (01h): Supported 00:14:20.357 Get Log Page (02h): Supported 00:14:20.357 Delete I/O Completion Queue (04h): Supported 00:14:20.357 Create I/O Completion Queue (05h): Supported 00:14:20.357 Identify (06h): Supported 00:14:20.357 Abort (08h): Supported 00:14:20.357 Set Features (09h): Supported 00:14:20.357 Get Features (0Ah): Supported 00:14:20.357 Asynchronous Event Request (0Ch): Supported 00:14:20.357 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:20.357 Directive Send (19h): Supported 00:14:20.357 Directive Receive (1Ah): Supported 00:14:20.357 Virtualization Management (1Ch): Supported 00:14:20.357 Doorbell Buffer Config (7Ch): Supported 00:14:20.357 Format NVM (80h): Supported LBA-Change 00:14:20.357 I/O Commands 00:14:20.357 ------------ 00:14:20.357 Flush (00h): Supported LBA-Change 00:14:20.357 Write (01h): Supported LBA-Change 00:14:20.357 Read (02h): Supported 00:14:20.357 Compare (05h): Supported 00:14:20.357 Write Zeroes (08h): Supported LBA-Change 00:14:20.357 Dataset Management (09h): Supported LBA-Change 00:14:20.357 Unknown (0Ch): Supported 00:14:20.357 Unknown (12h): Supported 00:14:20.357 Copy (19h): Supported LBA-Change 00:14:20.357 Unknown (1Dh): Supported LBA-Change 00:14:20.357 00:14:20.357 Error Log 00:14:20.357 ========= 00:14:20.357 00:14:20.357 Arbitration 00:14:20.357 =========== 00:14:20.357 Arbitration Burst: no limit 00:14:20.357 00:14:20.357 Power Management 00:14:20.357 ================ 00:14:20.357 Number of Power States: 1 00:14:20.357 Current Power State: Power State #0 00:14:20.357 Power State #0: 00:14:20.357 Max Power: 25.00 W 00:14:20.357 Non-Operational State: Operational 00:14:20.357 Entry Latency: 16 microseconds 00:14:20.357 Exit Latency: 4 microseconds 00:14:20.357 Relative Read Throughput: 0 00:14:20.357 Relative Read Latency: 0 00:14:20.357 Relative Write Throughput: 0 00:14:20.357 Relative Write Latency: 0 00:14:20.357 Idle Power: Not Reported 00:14:20.357 Active Power: Not Reported 00:14:20.357 Non-Operational Permissive Mode: Not Supported 00:14:20.357 00:14:20.357 Health Information 00:14:20.357 ================== 00:14:20.357 Critical Warnings: 00:14:20.357 Available Spare Space: OK 00:14:20.357 Temperature: OK 00:14:20.357 Device Reliability: OK 00:14:20.357 Read Only: No 00:14:20.357 Volatile Memory Backup: OK 00:14:20.358 Current Temperature: 323 Kelvin (50 Celsius) 00:14:20.358 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:20.358 Available Spare: 
0% 00:14:20.358 Available Spare Threshold: 0% 00:14:20.358 Life Percentage Used: 0% 00:14:20.358 [2024-11-20 15:26:06.265917] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64355 terminated unexpected 00:14:20.358 [2024-11-20 15:26:06.266629] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64355 terminated unexpected 00:14:20.358 Data Units Read: 811 00:14:20.358 Data Units Written: 740 00:14:20.358 Host Read Commands: 33471 00:14:20.358 Host Write Commands: 32894 00:14:20.358 Controller Busy Time: 0 minutes 00:14:20.358 Power Cycles: 0 00:14:20.358 Power On Hours: 0 hours 00:14:20.358 Unsafe Shutdowns: 0 00:14:20.358 Unrecoverable Media Errors: 0 00:14:20.358 Lifetime Error Log Entries: 0 00:14:20.358 Warning Temperature Time: 0 minutes 00:14:20.358 Critical Temperature Time: 0 minutes 00:14:20.358 00:14:20.358 Number of Queues 00:14:20.358 ================ 00:14:20.358 Number of I/O Submission Queues: 64 00:14:20.358 Number of I/O Completion Queues: 64 00:14:20.358 00:14:20.358 ZNS Specific Controller Data 00:14:20.358 ============================ 00:14:20.358 Zone Append Size Limit: 0 00:14:20.358 00:14:20.358 00:14:20.358 Active Namespaces 00:14:20.358 ================= 00:14:20.358 Namespace ID:1 00:14:20.358 Error Recovery Timeout: Unlimited 00:14:20.358 Command Set Identifier: NVM (00h) 00:14:20.358 Deallocate: Supported 00:14:20.358 Deallocated/Unwritten Error: Supported 00:14:20.358 Deallocated Read Value: All 0x00 00:14:20.358 Deallocate in Write Zeroes: Not Supported 00:14:20.358 Deallocated Guard Field: 0xFFFF 00:14:20.358 Flush: Supported 00:14:20.358 Reservation: Not Supported 00:14:20.358 Namespace Sharing Capabilities: Multiple Controllers 00:14:20.358 Size (in LBAs): 262144 (1GiB) 00:14:20.358 Capacity (in LBAs): 262144 (1GiB) 00:14:20.358 Utilization (in LBAs): 262144 (1GiB) 00:14:20.358 Thin Provisioning: Not Supported 00:14:20.358 Per-NS Atomic Units: No 00:14:20.358 Maximum Single Source Range Length: 128 00:14:20.358 Maximum Copy Length: 128 00:14:20.358 Maximum Source Range Count: 128 00:14:20.358 NGUID/EUI64 Never Reused: No 00:14:20.358 Namespace Write Protected: No 00:14:20.358 Endurance group ID: 1 00:14:20.358 Number of LBA Formats: 8 00:14:20.358 Current LBA Format: LBA Format #04 00:14:20.358 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:20.358 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:20.358 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:20.358 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:20.358 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:20.358 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:20.358 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:20.358 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:20.358 00:14:20.358 Get Feature FDP: 00:14:20.358 ================ 00:14:20.358 Enabled: Yes 00:14:20.358 FDP configuration index: 0 00:14:20.358 00:14:20.358 FDP configurations log page 00:14:20.358 =========================== 00:14:20.358 Number of FDP configurations: 1 00:14:20.358 Version: 0 00:14:20.358 Size: 112 00:14:20.358 FDP Configuration Descriptor: 0 00:14:20.358 Descriptor Size: 96 00:14:20.358 Reclaim Group Identifier format: 2 00:14:20.358 FDP Volatile Write Cache: Not Present 00:14:20.358 FDP Configuration: Valid 00:14:20.358 Vendor Specific Size: 0 00:14:20.358 Number of Reclaim Groups: 2 00:14:20.358 Number of Reclaim Unit Handles: 8 00:14:20.358 Max Placement Identifiers: 128 00:14:20.358
Number of Namespaces Supported: 256 00:14:20.358 Reclaim unit Nominal Size: 6000000 bytes 00:14:20.358 Estimated Reclaim Unit Time Limit: Not Reported 00:14:20.358 RUH Desc #000: RUH Type: Initially Isolated 00:14:20.358 RUH Desc #001: RUH Type: Initially Isolated 00:14:20.358 RUH Desc #002: RUH Type: Initially Isolated 00:14:20.358 RUH Desc #003: RUH Type: Initially Isolated 00:14:20.358 RUH Desc #004: RUH Type: Initially Isolated 00:14:20.358 RUH Desc #005: RUH Type: Initially Isolated 00:14:20.358 RUH Desc #006: RUH Type: Initially Isolated 00:14:20.358 RUH Desc #007: RUH Type: Initially Isolated 00:14:20.358 00:14:20.358 FDP reclaim unit handle usage log page 00:14:20.358 ====================================== 00:14:20.358 Number of Reclaim Unit Handles: 8 00:14:20.358 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:20.358 RUH Usage Desc #001: RUH Attributes: Unused 00:14:20.358 RUH Usage Desc #002: RUH Attributes: Unused 00:14:20.358 RUH Usage Desc #003: RUH Attributes: Unused 00:14:20.358 RUH Usage Desc #004: RUH Attributes: Unused 00:14:20.358 RUH Usage Desc #005: RUH Attributes: Unused 00:14:20.358 RUH Usage Desc #006: RUH Attributes: Unused 00:14:20.358 RUH Usage Desc #007: RUH Attributes: Unused 00:14:20.358 00:14:20.358 FDP statistics log page 00:14:20.358 ======================= 00:14:20.358 Host bytes with metadata written: 464494592 00:14:20.358 [2024-11-20 15:26:06.268621] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64355 terminated unexpected 00:14:20.358 Media bytes with metadata written: 464547840 00:14:20.358 Media bytes erased: 0 00:14:20.358 00:14:20.358 FDP events log page 00:14:20.358 =================== 00:14:20.358 Number of FDP events: 0 00:14:20.358 00:14:20.358 NVM Specific Namespace Data 00:14:20.358 =========================== 00:14:20.358 Logical Block Storage Tag Mask: 0 00:14:20.358 Protection Information Capabilities: 00:14:20.358 16b Guard Protection Information Storage Tag Support: No 00:14:20.358 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:20.358 Storage Tag Check Read Support: No 00:14:20.358 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.358 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.358 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.358 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.358 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.358 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.358 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.358 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.358 ===================================================== 00:14:20.358 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:20.358 ===================================================== 00:14:20.358 Controller Capabilities/Features 00:14:20.358 ================================ 00:14:20.358 Vendor ID: 1b36 00:14:20.358 Subsystem Vendor ID: 1af4 00:14:20.358 Serial Number: 12342 00:14:20.358 Model Number: QEMU NVMe Ctrl 00:14:20.358 Firmware Version: 8.0.0 00:14:20.358 Recommended Arb Burst: 6 00:14:20.358 IEEE OUI Identifier: 00 54 52 00:14:20.358 Multi-path I/O
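
The FDP statistics above report both host and media bytes written, which is enough for a rough metadata write-amplification estimate (media bytes divided by host bytes). Using the figures from this dump:

  # Metadata write amplification from the FDP statistics log page above.
  host=464494592 media=464547840
  awk -v h="$host" -v m="$media" 'BEGIN { printf "WAF = %.6f\n", m / h }'
  # -> WAF = 1.000115, essentially no amplification in this short run.
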
00:14:20.358 May have multiple subsystem ports: No 00:14:20.358 May have multiple controllers: No 00:14:20.358 Associated with SR-IOV VF: No 00:14:20.358 Max Data Transfer Size: 524288 00:14:20.358 Max Number of Namespaces: 256 00:14:20.358 Max Number of I/O Queues: 64 00:14:20.358 NVMe Specification Version (VS): 1.4 00:14:20.358 NVMe Specification Version (Identify): 1.4 00:14:20.358 Maximum Queue Entries: 2048 00:14:20.358 Contiguous Queues Required: Yes 00:14:20.358 Arbitration Mechanisms Supported 00:14:20.358 Weighted Round Robin: Not Supported 00:14:20.358 Vendor Specific: Not Supported 00:14:20.358 Reset Timeout: 7500 ms 00:14:20.358 Doorbell Stride: 4 bytes 00:14:20.359 NVM Subsystem Reset: Not Supported 00:14:20.359 Command Sets Supported 00:14:20.359 NVM Command Set: Supported 00:14:20.359 Boot Partition: Not Supported 00:14:20.359 Memory Page Size Minimum: 4096 bytes 00:14:20.359 Memory Page Size Maximum: 65536 bytes 00:14:20.359 Persistent Memory Region: Not Supported 00:14:20.359 Optional Asynchronous Events Supported 00:14:20.359 Namespace Attribute Notices: Supported 00:14:20.359 Firmware Activation Notices: Not Supported 00:14:20.359 ANA Change Notices: Not Supported 00:14:20.359 PLE Aggregate Log Change Notices: Not Supported 00:14:20.359 LBA Status Info Alert Notices: Not Supported 00:14:20.359 EGE Aggregate Log Change Notices: Not Supported 00:14:20.359 Normal NVM Subsystem Shutdown event: Not Supported 00:14:20.359 Zone Descriptor Change Notices: Not Supported 00:14:20.359 Discovery Log Change Notices: Not Supported 00:14:20.359 Controller Attributes 00:14:20.359 128-bit Host Identifier: Not Supported 00:14:20.359 Non-Operational Permissive Mode: Not Supported 00:14:20.359 NVM Sets: Not Supported 00:14:20.359 Read Recovery Levels: Not Supported 00:14:20.359 Endurance Groups: Not Supported 00:14:20.359 Predictable Latency Mode: Not Supported 00:14:20.359 Traffic Based Keep ALive: Not Supported 00:14:20.359 Namespace Granularity: Not Supported 00:14:20.359 SQ Associations: Not Supported 00:14:20.359 UUID List: Not Supported 00:14:20.359 Multi-Domain Subsystem: Not Supported 00:14:20.359 Fixed Capacity Management: Not Supported 00:14:20.359 Variable Capacity Management: Not Supported 00:14:20.359 Delete Endurance Group: Not Supported 00:14:20.359 Delete NVM Set: Not Supported 00:14:20.359 Extended LBA Formats Supported: Supported 00:14:20.359 Flexible Data Placement Supported: Not Supported 00:14:20.359 00:14:20.359 Controller Memory Buffer Support 00:14:20.359 ================================ 00:14:20.359 Supported: No 00:14:20.359 00:14:20.359 Persistent Memory Region Support 00:14:20.359 ================================ 00:14:20.359 Supported: No 00:14:20.359 00:14:20.359 Admin Command Set Attributes 00:14:20.359 ============================ 00:14:20.359 Security Send/Receive: Not Supported 00:14:20.359 Format NVM: Supported 00:14:20.359 Firmware Activate/Download: Not Supported 00:14:20.359 Namespace Management: Supported 00:14:20.359 Device Self-Test: Not Supported 00:14:20.359 Directives: Supported 00:14:20.359 NVMe-MI: Not Supported 00:14:20.359 Virtualization Management: Not Supported 00:14:20.359 Doorbell Buffer Config: Supported 00:14:20.359 Get LBA Status Capability: Not Supported 00:14:20.359 Command & Feature Lockdown Capability: Not Supported 00:14:20.359 Abort Command Limit: 4 00:14:20.359 Async Event Request Limit: 4 00:14:20.359 Number of Firmware Slots: N/A 00:14:20.359 Firmware Slot 1 Read-Only: N/A 00:14:20.359 Firmware Activation Without Reset: N/A 
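
Four controllers' worth of identify output is a lot to scan by eye; since the field labels are stable, a single grep can pull a per-controller summary. A sketch reusing labels that appear verbatim in this log, with the binary path and -i 0 shared-memory id as used by this run:

  # Summarize one line per field of interest across all attached controllers.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 |
      grep -E 'NVMe Controller at|Serial Number:|Subsystem NQN:|Current LBA Format:'
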
00:14:20.359 Multiple Update Detection Support: N/A 00:14:20.359 Firmware Update Granularity: No Information Provided 00:14:20.359 Per-Namespace SMART Log: Yes 00:14:20.359 Asymmetric Namespace Access Log Page: Not Supported 00:14:20.359 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:14:20.359 Command Effects Log Page: Supported 00:14:20.359 Get Log Page Extended Data: Supported 00:14:20.359 Telemetry Log Pages: Not Supported 00:14:20.359 Persistent Event Log Pages: Not Supported 00:14:20.359 Supported Log Pages Log Page: May Support 00:14:20.359 Commands Supported & Effects Log Page: Not Supported 00:14:20.359 Feature Identifiers & Effects Log Page:May Support 00:14:20.359 NVMe-MI Commands & Effects Log Page: May Support 00:14:20.359 Data Area 4 for Telemetry Log: Not Supported 00:14:20.359 Error Log Page Entries Supported: 1 00:14:20.359 Keep Alive: Not Supported 00:14:20.359 00:14:20.359 NVM Command Set Attributes 00:14:20.359 ========================== 00:14:20.359 Submission Queue Entry Size 00:14:20.359 Max: 64 00:14:20.359 Min: 64 00:14:20.359 Completion Queue Entry Size 00:14:20.359 Max: 16 00:14:20.359 Min: 16 00:14:20.359 Number of Namespaces: 256 00:14:20.359 Compare Command: Supported 00:14:20.359 Write Uncorrectable Command: Not Supported 00:14:20.359 Dataset Management Command: Supported 00:14:20.359 Write Zeroes Command: Supported 00:14:20.359 Set Features Save Field: Supported 00:14:20.359 Reservations: Not Supported 00:14:20.359 Timestamp: Supported 00:14:20.359 Copy: Supported 00:14:20.359 Volatile Write Cache: Present 00:14:20.359 Atomic Write Unit (Normal): 1 00:14:20.359 Atomic Write Unit (PFail): 1 00:14:20.359 Atomic Compare & Write Unit: 1 00:14:20.359 Fused Compare & Write: Not Supported 00:14:20.359 Scatter-Gather List 00:14:20.359 SGL Command Set: Supported 00:14:20.359 SGL Keyed: Not Supported 00:14:20.359 SGL Bit Bucket Descriptor: Not Supported 00:14:20.359 SGL Metadata Pointer: Not Supported 00:14:20.359 Oversized SGL: Not Supported 00:14:20.359 SGL Metadata Address: Not Supported 00:14:20.359 SGL Offset: Not Supported 00:14:20.359 Transport SGL Data Block: Not Supported 00:14:20.359 Replay Protected Memory Block: Not Supported 00:14:20.359 00:14:20.359 Firmware Slot Information 00:14:20.359 ========================= 00:14:20.359 Active slot: 1 00:14:20.359 Slot 1 Firmware Revision: 1.0 00:14:20.359 00:14:20.359 00:14:20.359 Commands Supported and Effects 00:14:20.359 ============================== 00:14:20.359 Admin Commands 00:14:20.359 -------------- 00:14:20.359 Delete I/O Submission Queue (00h): Supported 00:14:20.359 Create I/O Submission Queue (01h): Supported 00:14:20.359 Get Log Page (02h): Supported 00:14:20.359 Delete I/O Completion Queue (04h): Supported 00:14:20.359 Create I/O Completion Queue (05h): Supported 00:14:20.359 Identify (06h): Supported 00:14:20.359 Abort (08h): Supported 00:14:20.359 Set Features (09h): Supported 00:14:20.359 Get Features (0Ah): Supported 00:14:20.359 Asynchronous Event Request (0Ch): Supported 00:14:20.359 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:20.359 Directive Send (19h): Supported 00:14:20.359 Directive Receive (1Ah): Supported 00:14:20.359 Virtualization Management (1Ch): Supported 00:14:20.359 Doorbell Buffer Config (7Ch): Supported 00:14:20.359 Format NVM (80h): Supported LBA-Change 00:14:20.359 I/O Commands 00:14:20.359 ------------ 00:14:20.359 Flush (00h): Supported LBA-Change 00:14:20.359 Write (01h): Supported LBA-Change 00:14:20.359 Read (02h): Supported 00:14:20.359 Compare (05h): 
Supported 00:14:20.359 Write Zeroes (08h): Supported LBA-Change 00:14:20.359 Dataset Management (09h): Supported LBA-Change 00:14:20.359 Unknown (0Ch): Supported 00:14:20.359 Unknown (12h): Supported 00:14:20.359 Copy (19h): Supported LBA-Change 00:14:20.359 Unknown (1Dh): Supported LBA-Change 00:14:20.359 00:14:20.359 Error Log 00:14:20.359 ========= 00:14:20.359 00:14:20.359 Arbitration 00:14:20.359 =========== 00:14:20.359 Arbitration Burst: no limit 00:14:20.359 00:14:20.359 Power Management 00:14:20.359 ================ 00:14:20.359 Number of Power States: 1 00:14:20.359 Current Power State: Power State #0 00:14:20.359 Power State #0: 00:14:20.359 Max Power: 25.00 W 00:14:20.359 Non-Operational State: Operational 00:14:20.359 Entry Latency: 16 microseconds 00:14:20.359 Exit Latency: 4 microseconds 00:14:20.359 Relative Read Throughput: 0 00:14:20.359 Relative Read Latency: 0 00:14:20.359 Relative Write Throughput: 0 00:14:20.359 Relative Write Latency: 0 00:14:20.359 Idle Power: Not Reported 00:14:20.359 Active Power: Not Reported 00:14:20.359 Non-Operational Permissive Mode: Not Supported 00:14:20.359 00:14:20.359 Health Information 00:14:20.359 ================== 00:14:20.359 Critical Warnings: 00:14:20.359 Available Spare Space: OK 00:14:20.359 Temperature: OK 00:14:20.359 Device Reliability: OK 00:14:20.359 Read Only: No 00:14:20.359 Volatile Memory Backup: OK 00:14:20.359 Current Temperature: 323 Kelvin (50 Celsius) 00:14:20.359 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:20.359 Available Spare: 0% 00:14:20.359 Available Spare Threshold: 0% 00:14:20.359 Life Percentage Used: 0% 00:14:20.359 Data Units Read: 2246 00:14:20.359 Data Units Written: 2033 00:14:20.359 Host Read Commands: 98607 00:14:20.359 Host Write Commands: 96878 00:14:20.359 Controller Busy Time: 0 minutes 00:14:20.359 Power Cycles: 0 00:14:20.359 Power On Hours: 0 hours 00:14:20.359 Unsafe Shutdowns: 0 00:14:20.359 Unrecoverable Media Errors: 0 00:14:20.359 Lifetime Error Log Entries: 0 00:14:20.359 Warning Temperature Time: 0 minutes 00:14:20.359 Critical Temperature Time: 0 minutes 00:14:20.359 00:14:20.359 Number of Queues 00:14:20.359 ================ 00:14:20.359 Number of I/O Submission Queues: 64 00:14:20.359 Number of I/O Completion Queues: 64 00:14:20.359 00:14:20.360 ZNS Specific Controller Data 00:14:20.360 ============================ 00:14:20.360 Zone Append Size Limit: 0 00:14:20.360 00:14:20.360 00:14:20.360 Active Namespaces 00:14:20.360 ================= 00:14:20.360 Namespace ID:1 00:14:20.360 Error Recovery Timeout: Unlimited 00:14:20.360 Command Set Identifier: NVM (00h) 00:14:20.360 Deallocate: Supported 00:14:20.360 Deallocated/Unwritten Error: Supported 00:14:20.360 Deallocated Read Value: All 0x00 00:14:20.360 Deallocate in Write Zeroes: Not Supported 00:14:20.360 Deallocated Guard Field: 0xFFFF 00:14:20.360 Flush: Supported 00:14:20.360 Reservation: Not Supported 00:14:20.360 Namespace Sharing Capabilities: Private 00:14:20.360 Size (in LBAs): 1048576 (4GiB) 00:14:20.360 Capacity (in LBAs): 1048576 (4GiB) 00:14:20.360 Utilization (in LBAs): 1048576 (4GiB) 00:14:20.360 Thin Provisioning: Not Supported 00:14:20.360 Per-NS Atomic Units: No 00:14:20.360 Maximum Single Source Range Length: 128 00:14:20.360 Maximum Copy Length: 128 00:14:20.360 Maximum Source Range Count: 128 00:14:20.360 NGUID/EUI64 Never Reused: No 00:14:20.360 Namespace Write Protected: No 00:14:20.360 Number of LBA Formats: 8 00:14:20.360 Current LBA Format: LBA Format #04 00:14:20.360 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:14:20.360 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:20.360 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:20.360 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:20.360 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:20.360 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:20.360 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:20.360 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:20.360 00:14:20.360 NVM Specific Namespace Data 00:14:20.360 =========================== 00:14:20.360 Logical Block Storage Tag Mask: 0 00:14:20.360 Protection Information Capabilities: 00:14:20.360 16b Guard Protection Information Storage Tag Support: No 00:14:20.360 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:20.360 Storage Tag Check Read Support: No 00:14:20.360 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Namespace ID:2 00:14:20.360 Error Recovery Timeout: Unlimited 00:14:20.360 Command Set Identifier: NVM (00h) 00:14:20.360 Deallocate: Supported 00:14:20.360 Deallocated/Unwritten Error: Supported 00:14:20.360 Deallocated Read Value: All 0x00 00:14:20.360 Deallocate in Write Zeroes: Not Supported 00:14:20.360 Deallocated Guard Field: 0xFFFF 00:14:20.360 Flush: Supported 00:14:20.360 Reservation: Not Supported 00:14:20.360 Namespace Sharing Capabilities: Private 00:14:20.360 Size (in LBAs): 1048576 (4GiB) 00:14:20.360 Capacity (in LBAs): 1048576 (4GiB) 00:14:20.360 Utilization (in LBAs): 1048576 (4GiB) 00:14:20.360 Thin Provisioning: Not Supported 00:14:20.360 Per-NS Atomic Units: No 00:14:20.360 Maximum Single Source Range Length: 128 00:14:20.360 Maximum Copy Length: 128 00:14:20.360 Maximum Source Range Count: 128 00:14:20.360 NGUID/EUI64 Never Reused: No 00:14:20.360 Namespace Write Protected: No 00:14:20.360 Number of LBA Formats: 8 00:14:20.360 Current LBA Format: LBA Format #04 00:14:20.360 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:20.360 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:20.360 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:20.360 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:20.360 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:20.360 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:20.360 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:20.360 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:20.360 00:14:20.360 NVM Specific Namespace Data 00:14:20.360 =========================== 00:14:20.360 Logical Block Storage Tag Mask: 0 00:14:20.360 Protection Information Capabilities: 00:14:20.360 16b Guard Protection Information Storage Tag Support: No 00:14:20.360 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:14:20.360 Storage Tag Check Read Support: No 00:14:20.360 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.360 Namespace ID:3 00:14:20.360 Error Recovery Timeout: Unlimited 00:14:20.360 Command Set Identifier: NVM (00h) 00:14:20.360 Deallocate: Supported 00:14:20.360 Deallocated/Unwritten Error: Supported 00:14:20.360 Deallocated Read Value: All 0x00 00:14:20.360 Deallocate in Write Zeroes: Not Supported 00:14:20.360 Deallocated Guard Field: 0xFFFF 00:14:20.360 Flush: Supported 00:14:20.360 Reservation: Not Supported 00:14:20.360 Namespace Sharing Capabilities: Private 00:14:20.360 Size (in LBAs): 1048576 (4GiB) 00:14:20.640 Capacity (in LBAs): 1048576 (4GiB) 00:14:20.640 Utilization (in LBAs): 1048576 (4GiB) 00:14:20.640 Thin Provisioning: Not Supported 00:14:20.640 Per-NS Atomic Units: No 00:14:20.640 Maximum Single Source Range Length: 128 00:14:20.640 Maximum Copy Length: 128 00:14:20.640 Maximum Source Range Count: 128 00:14:20.640 NGUID/EUI64 Never Reused: No 00:14:20.640 Namespace Write Protected: No 00:14:20.640 Number of LBA Formats: 8 00:14:20.640 Current LBA Format: LBA Format #04 00:14:20.640 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:20.640 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:20.640 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:20.640 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:20.640 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:20.640 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:20.640 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:20.640 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:20.640 00:14:20.640 NVM Specific Namespace Data 00:14:20.640 =========================== 00:14:20.640 Logical Block Storage Tag Mask: 0 00:14:20.640 Protection Information Capabilities: 00:14:20.640 16b Guard Protection Information Storage Tag Support: No 00:14:20.640 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:20.640 Storage Tag Check Read Support: No 00:14:20.640 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.640 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.640 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.640 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.640 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.640 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.640 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.640 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.640 15:26:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:20.640 15:26:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:14:20.947 ===================================================== 00:14:20.947 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:20.947 ===================================================== 00:14:20.947 Controller Capabilities/Features 00:14:20.947 ================================ 00:14:20.947 Vendor ID: 1b36 00:14:20.947 Subsystem Vendor ID: 1af4 00:14:20.947 Serial Number: 12340 00:14:20.947 Model Number: QEMU NVMe Ctrl 00:14:20.947 Firmware Version: 8.0.0 00:14:20.947 Recommended Arb Burst: 6 00:14:20.947 IEEE OUI Identifier: 00 54 52 00:14:20.947 Multi-path I/O 00:14:20.947 May have multiple subsystem ports: No 00:14:20.947 May have multiple controllers: No 00:14:20.947 Associated with SR-IOV VF: No 00:14:20.947 Max Data Transfer Size: 524288 00:14:20.947 Max Number of Namespaces: 256 00:14:20.947 Max Number of I/O Queues: 64 00:14:20.947 NVMe Specification Version (VS): 1.4 00:14:20.947 NVMe Specification Version (Identify): 1.4 00:14:20.947 Maximum Queue Entries: 2048 00:14:20.947 Contiguous Queues Required: Yes 00:14:20.947 Arbitration Mechanisms Supported 00:14:20.947 Weighted Round Robin: Not Supported 00:14:20.947 Vendor Specific: Not Supported 00:14:20.947 Reset Timeout: 7500 ms 00:14:20.947 Doorbell Stride: 4 bytes 00:14:20.947 NVM Subsystem Reset: Not Supported 00:14:20.947 Command Sets Supported 00:14:20.947 NVM Command Set: Supported 00:14:20.947 Boot Partition: Not Supported 00:14:20.947 Memory Page Size Minimum: 4096 bytes 00:14:20.947 Memory Page Size Maximum: 65536 bytes 00:14:20.947 Persistent Memory Region: Not Supported 00:14:20.947 Optional Asynchronous Events Supported 00:14:20.947 Namespace Attribute Notices: Supported 00:14:20.947 Firmware Activation Notices: Not Supported 00:14:20.947 ANA Change Notices: Not Supported 00:14:20.947 PLE Aggregate Log Change Notices: Not Supported 00:14:20.947 LBA Status Info Alert Notices: Not Supported 00:14:20.947 EGE Aggregate Log Change Notices: Not Supported 00:14:20.947 Normal NVM Subsystem Shutdown event: Not Supported 00:14:20.947 Zone Descriptor Change Notices: Not Supported 00:14:20.947 Discovery Log Change Notices: Not Supported 00:14:20.947 Controller Attributes 00:14:20.947 128-bit Host Identifier: Not Supported 00:14:20.947 Non-Operational Permissive Mode: Not Supported 00:14:20.947 NVM Sets: Not Supported 00:14:20.947 Read Recovery Levels: Not Supported 00:14:20.947 Endurance Groups: Not Supported 00:14:20.947 Predictable Latency Mode: Not Supported 00:14:20.947 Traffic Based Keep ALive: Not Supported 00:14:20.947 Namespace Granularity: Not Supported 00:14:20.947 SQ Associations: Not Supported 00:14:20.947 UUID List: Not Supported 00:14:20.947 Multi-Domain Subsystem: Not Supported 00:14:20.947 Fixed Capacity Management: Not Supported 00:14:20.947 Variable Capacity Management: Not Supported 00:14:20.947 Delete Endurance Group: Not Supported 00:14:20.947 Delete NVM Set: Not Supported 00:14:20.947 Extended LBA Formats Supported: Supported 00:14:20.947 Flexible Data Placement Supported: Not Supported 00:14:20.947 00:14:20.947 Controller Memory Buffer Support 00:14:20.947 ================================ 00:14:20.947 Supported: No 00:14:20.947 00:14:20.947 Persistent Memory Region Support 00:14:20.947 
================================ 00:14:20.947 Supported: No 00:14:20.947 00:14:20.947 Admin Command Set Attributes 00:14:20.947 ============================ 00:14:20.947 Security Send/Receive: Not Supported 00:14:20.947 Format NVM: Supported 00:14:20.947 Firmware Activate/Download: Not Supported 00:14:20.947 Namespace Management: Supported 00:14:20.947 Device Self-Test: Not Supported 00:14:20.947 Directives: Supported 00:14:20.947 NVMe-MI: Not Supported 00:14:20.947 Virtualization Management: Not Supported 00:14:20.947 Doorbell Buffer Config: Supported 00:14:20.947 Get LBA Status Capability: Not Supported 00:14:20.947 Command & Feature Lockdown Capability: Not Supported 00:14:20.947 Abort Command Limit: 4 00:14:20.947 Async Event Request Limit: 4 00:14:20.947 Number of Firmware Slots: N/A 00:14:20.947 Firmware Slot 1 Read-Only: N/A 00:14:20.947 Firmware Activation Without Reset: N/A 00:14:20.947 Multiple Update Detection Support: N/A 00:14:20.947 Firmware Update Granularity: No Information Provided 00:14:20.947 Per-Namespace SMART Log: Yes 00:14:20.947 Asymmetric Namespace Access Log Page: Not Supported 00:14:20.947 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:14:20.947 Command Effects Log Page: Supported 00:14:20.947 Get Log Page Extended Data: Supported 00:14:20.947 Telemetry Log Pages: Not Supported 00:14:20.947 Persistent Event Log Pages: Not Supported 00:14:20.947 Supported Log Pages Log Page: May Support 00:14:20.947 Commands Supported & Effects Log Page: Not Supported 00:14:20.947 Feature Identifiers & Effects Log Page:May Support 00:14:20.947 NVMe-MI Commands & Effects Log Page: May Support 00:14:20.947 Data Area 4 for Telemetry Log: Not Supported 00:14:20.947 Error Log Page Entries Supported: 1 00:14:20.947 Keep Alive: Not Supported 00:14:20.947 00:14:20.947 NVM Command Set Attributes 00:14:20.947 ========================== 00:14:20.947 Submission Queue Entry Size 00:14:20.947 Max: 64 00:14:20.947 Min: 64 00:14:20.947 Completion Queue Entry Size 00:14:20.947 Max: 16 00:14:20.947 Min: 16 00:14:20.947 Number of Namespaces: 256 00:14:20.947 Compare Command: Supported 00:14:20.947 Write Uncorrectable Command: Not Supported 00:14:20.947 Dataset Management Command: Supported 00:14:20.947 Write Zeroes Command: Supported 00:14:20.947 Set Features Save Field: Supported 00:14:20.947 Reservations: Not Supported 00:14:20.947 Timestamp: Supported 00:14:20.947 Copy: Supported 00:14:20.947 Volatile Write Cache: Present 00:14:20.947 Atomic Write Unit (Normal): 1 00:14:20.947 Atomic Write Unit (PFail): 1 00:14:20.947 Atomic Compare & Write Unit: 1 00:14:20.947 Fused Compare & Write: Not Supported 00:14:20.947 Scatter-Gather List 00:14:20.947 SGL Command Set: Supported 00:14:20.947 SGL Keyed: Not Supported 00:14:20.947 SGL Bit Bucket Descriptor: Not Supported 00:14:20.947 SGL Metadata Pointer: Not Supported 00:14:20.947 Oversized SGL: Not Supported 00:14:20.947 SGL Metadata Address: Not Supported 00:14:20.947 SGL Offset: Not Supported 00:14:20.947 Transport SGL Data Block: Not Supported 00:14:20.947 Replay Protected Memory Block: Not Supported 00:14:20.947 00:14:20.947 Firmware Slot Information 00:14:20.947 ========================= 00:14:20.947 Active slot: 1 00:14:20.947 Slot 1 Firmware Revision: 1.0 00:14:20.947 00:14:20.947 00:14:20.947 Commands Supported and Effects 00:14:20.947 ============================== 00:14:20.947 Admin Commands 00:14:20.947 -------------- 00:14:20.947 Delete I/O Submission Queue (00h): Supported 00:14:20.947 Create I/O Submission Queue (01h): Supported 00:14:20.947 
Get Log Page (02h): Supported 00:14:20.947 Delete I/O Completion Queue (04h): Supported 00:14:20.948 Create I/O Completion Queue (05h): Supported 00:14:20.948 Identify (06h): Supported 00:14:20.948 Abort (08h): Supported 00:14:20.948 Set Features (09h): Supported 00:14:20.948 Get Features (0Ah): Supported 00:14:20.948 Asynchronous Event Request (0Ch): Supported 00:14:20.948 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:20.948 Directive Send (19h): Supported 00:14:20.948 Directive Receive (1Ah): Supported 00:14:20.948 Virtualization Management (1Ch): Supported 00:14:20.948 Doorbell Buffer Config (7Ch): Supported 00:14:20.948 Format NVM (80h): Supported LBA-Change 00:14:20.948 I/O Commands 00:14:20.948 ------------ 00:14:20.948 Flush (00h): Supported LBA-Change 00:14:20.948 Write (01h): Supported LBA-Change 00:14:20.948 Read (02h): Supported 00:14:20.948 Compare (05h): Supported 00:14:20.948 Write Zeroes (08h): Supported LBA-Change 00:14:20.948 Dataset Management (09h): Supported LBA-Change 00:14:20.948 Unknown (0Ch): Supported 00:14:20.948 Unknown (12h): Supported 00:14:20.948 Copy (19h): Supported LBA-Change 00:14:20.948 Unknown (1Dh): Supported LBA-Change 00:14:20.948 00:14:20.948 Error Log 00:14:20.948 ========= 00:14:20.948 00:14:20.948 Arbitration 00:14:20.948 =========== 00:14:20.948 Arbitration Burst: no limit 00:14:20.948 00:14:20.948 Power Management 00:14:20.948 ================ 00:14:20.948 Number of Power States: 1 00:14:20.948 Current Power State: Power State #0 00:14:20.948 Power State #0: 00:14:20.948 Max Power: 25.00 W 00:14:20.948 Non-Operational State: Operational 00:14:20.948 Entry Latency: 16 microseconds 00:14:20.948 Exit Latency: 4 microseconds 00:14:20.948 Relative Read Throughput: 0 00:14:20.948 Relative Read Latency: 0 00:14:20.948 Relative Write Throughput: 0 00:14:20.948 Relative Write Latency: 0 00:14:20.948 Idle Power: Not Reported 00:14:20.948 Active Power: Not Reported 00:14:20.948 Non-Operational Permissive Mode: Not Supported 00:14:20.948 00:14:20.948 Health Information 00:14:20.948 ================== 00:14:20.948 Critical Warnings: 00:14:20.948 Available Spare Space: OK 00:14:20.948 Temperature: OK 00:14:20.948 Device Reliability: OK 00:14:20.948 Read Only: No 00:14:20.948 Volatile Memory Backup: OK 00:14:20.948 Current Temperature: 323 Kelvin (50 Celsius) 00:14:20.948 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:20.948 Available Spare: 0% 00:14:20.948 Available Spare Threshold: 0% 00:14:20.948 Life Percentage Used: 0% 00:14:20.948 Data Units Read: 709 00:14:20.948 Data Units Written: 637 00:14:20.948 Host Read Commands: 32435 00:14:20.948 Host Write Commands: 32221 00:14:20.948 Controller Busy Time: 0 minutes 00:14:20.948 Power Cycles: 0 00:14:20.948 Power On Hours: 0 hours 00:14:20.948 Unsafe Shutdowns: 0 00:14:20.948 Unrecoverable Media Errors: 0 00:14:20.948 Lifetime Error Log Entries: 0 00:14:20.948 Warning Temperature Time: 0 minutes 00:14:20.948 Critical Temperature Time: 0 minutes 00:14:20.948 00:14:20.948 Number of Queues 00:14:20.948 ================ 00:14:20.948 Number of I/O Submission Queues: 64 00:14:20.948 Number of I/O Completion Queues: 64 00:14:20.948 00:14:20.948 ZNS Specific Controller Data 00:14:20.948 ============================ 00:14:20.948 Zone Append Size Limit: 0 00:14:20.948 00:14:20.948 00:14:20.948 Active Namespaces 00:14:20.948 ================= 00:14:20.948 Namespace ID:1 00:14:20.948 Error Recovery Timeout: Unlimited 00:14:20.948 Command Set Identifier: NVM (00h) 00:14:20.948 Deallocate: Supported 
00:14:20.948 Deallocated/Unwritten Error: Supported 00:14:20.948 Deallocated Read Value: All 0x00 00:14:20.948 Deallocate in Write Zeroes: Not Supported 00:14:20.948 Deallocated Guard Field: 0xFFFF 00:14:20.948 Flush: Supported 00:14:20.948 Reservation: Not Supported 00:14:20.948 Metadata Transferred as: Separate Metadata Buffer 00:14:20.948 Namespace Sharing Capabilities: Private 00:14:20.948 Size (in LBAs): 1548666 (5GiB) 00:14:20.948 Capacity (in LBAs): 1548666 (5GiB) 00:14:20.948 Utilization (in LBAs): 1548666 (5GiB) 00:14:20.948 Thin Provisioning: Not Supported 00:14:20.948 Per-NS Atomic Units: No 00:14:20.948 Maximum Single Source Range Length: 128 00:14:20.948 Maximum Copy Length: 128 00:14:20.948 Maximum Source Range Count: 128 00:14:20.948 NGUID/EUI64 Never Reused: No 00:14:20.948 Namespace Write Protected: No 00:14:20.948 Number of LBA Formats: 8 00:14:20.948 Current LBA Format: LBA Format #07 00:14:20.948 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:20.948 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:20.948 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:20.948 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:20.948 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:20.948 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:20.948 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:20.948 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:20.948 00:14:20.948 NVM Specific Namespace Data 00:14:20.948 =========================== 00:14:20.948 Logical Block Storage Tag Mask: 0 00:14:20.948 Protection Information Capabilities: 00:14:20.948 16b Guard Protection Information Storage Tag Support: No 00:14:20.948 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:20.948 Storage Tag Check Read Support: No 00:14:20.948 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.948 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.948 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.948 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.948 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.948 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.948 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.948 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:20.948 15:26:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:20.948 15:26:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:14:21.208 ===================================================== 00:14:21.208 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:21.208 ===================================================== 00:14:21.208 Controller Capabilities/Features 00:14:21.208 ================================ 00:14:21.208 Vendor ID: 1b36 00:14:21.208 Subsystem Vendor ID: 1af4 00:14:21.208 Serial Number: 12341 00:14:21.208 Model Number: QEMU NVMe Ctrl 00:14:21.208 Firmware Version: 8.0.0 00:14:21.208 Recommended Arb Burst: 6 00:14:21.208 IEEE OUI Identifier: 00 54 52 00:14:21.208 Multi-path I/O 00:14:21.208 May have multiple subsystem ports: No 00:14:21.208 May have multiple 
controllers: No 00:14:21.208 Associated with SR-IOV VF: No 00:14:21.208 Max Data Transfer Size: 524288 00:14:21.208 Max Number of Namespaces: 256 00:14:21.208 Max Number of I/O Queues: 64 00:14:21.208 NVMe Specification Version (VS): 1.4 00:14:21.208 NVMe Specification Version (Identify): 1.4 00:14:21.208 Maximum Queue Entries: 2048 00:14:21.208 Contiguous Queues Required: Yes 00:14:21.208 Arbitration Mechanisms Supported 00:14:21.208 Weighted Round Robin: Not Supported 00:14:21.208 Vendor Specific: Not Supported 00:14:21.208 Reset Timeout: 7500 ms 00:14:21.208 Doorbell Stride: 4 bytes 00:14:21.208 NVM Subsystem Reset: Not Supported 00:14:21.208 Command Sets Supported 00:14:21.208 NVM Command Set: Supported 00:14:21.208 Boot Partition: Not Supported 00:14:21.208 Memory Page Size Minimum: 4096 bytes 00:14:21.208 Memory Page Size Maximum: 65536 bytes 00:14:21.208 Persistent Memory Region: Not Supported 00:14:21.208 Optional Asynchronous Events Supported 00:14:21.208 Namespace Attribute Notices: Supported 00:14:21.208 Firmware Activation Notices: Not Supported 00:14:21.208 ANA Change Notices: Not Supported 00:14:21.208 PLE Aggregate Log Change Notices: Not Supported 00:14:21.208 LBA Status Info Alert Notices: Not Supported 00:14:21.208 EGE Aggregate Log Change Notices: Not Supported 00:14:21.208 Normal NVM Subsystem Shutdown event: Not Supported 00:14:21.208 Zone Descriptor Change Notices: Not Supported 00:14:21.208 Discovery Log Change Notices: Not Supported 00:14:21.208 Controller Attributes 00:14:21.208 128-bit Host Identifier: Not Supported 00:14:21.208 Non-Operational Permissive Mode: Not Supported 00:14:21.208 NVM Sets: Not Supported 00:14:21.208 Read Recovery Levels: Not Supported 00:14:21.208 Endurance Groups: Not Supported 00:14:21.208 Predictable Latency Mode: Not Supported 00:14:21.208 Traffic Based Keep ALive: Not Supported 00:14:21.208 Namespace Granularity: Not Supported 00:14:21.208 SQ Associations: Not Supported 00:14:21.208 UUID List: Not Supported 00:14:21.208 Multi-Domain Subsystem: Not Supported 00:14:21.208 Fixed Capacity Management: Not Supported 00:14:21.208 Variable Capacity Management: Not Supported 00:14:21.208 Delete Endurance Group: Not Supported 00:14:21.208 Delete NVM Set: Not Supported 00:14:21.208 Extended LBA Formats Supported: Supported 00:14:21.208 Flexible Data Placement Supported: Not Supported 00:14:21.208 00:14:21.208 Controller Memory Buffer Support 00:14:21.208 ================================ 00:14:21.208 Supported: No 00:14:21.208 00:14:21.208 Persistent Memory Region Support 00:14:21.208 ================================ 00:14:21.208 Supported: No 00:14:21.208 00:14:21.208 Admin Command Set Attributes 00:14:21.208 ============================ 00:14:21.208 Security Send/Receive: Not Supported 00:14:21.208 Format NVM: Supported 00:14:21.208 Firmware Activate/Download: Not Supported 00:14:21.208 Namespace Management: Supported 00:14:21.208 Device Self-Test: Not Supported 00:14:21.208 Directives: Supported 00:14:21.208 NVMe-MI: Not Supported 00:14:21.208 Virtualization Management: Not Supported 00:14:21.208 Doorbell Buffer Config: Supported 00:14:21.208 Get LBA Status Capability: Not Supported 00:14:21.208 Command & Feature Lockdown Capability: Not Supported 00:14:21.208 Abort Command Limit: 4 00:14:21.208 Async Event Request Limit: 4 00:14:21.208 Number of Firmware Slots: N/A 00:14:21.208 Firmware Slot 1 Read-Only: N/A 00:14:21.208 Firmware Activation Without Reset: N/A 00:14:21.208 Multiple Update Detection Support: N/A 00:14:21.208 Firmware Update 
Granularity: No Information Provided 00:14:21.208 Per-Namespace SMART Log: Yes 00:14:21.208 Asymmetric Namespace Access Log Page: Not Supported 00:14:21.208 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:14:21.208 Command Effects Log Page: Supported 00:14:21.208 Get Log Page Extended Data: Supported 00:14:21.208 Telemetry Log Pages: Not Supported 00:14:21.208 Persistent Event Log Pages: Not Supported 00:14:21.208 Supported Log Pages Log Page: May Support 00:14:21.208 Commands Supported & Effects Log Page: Not Supported 00:14:21.208 Feature Identifiers & Effects Log Page:May Support 00:14:21.208 NVMe-MI Commands & Effects Log Page: May Support 00:14:21.208 Data Area 4 for Telemetry Log: Not Supported 00:14:21.208 Error Log Page Entries Supported: 1 00:14:21.208 Keep Alive: Not Supported 00:14:21.208 00:14:21.208 NVM Command Set Attributes 00:14:21.208 ========================== 00:14:21.208 Submission Queue Entry Size 00:14:21.208 Max: 64 00:14:21.208 Min: 64 00:14:21.208 Completion Queue Entry Size 00:14:21.208 Max: 16 00:14:21.208 Min: 16 00:14:21.208 Number of Namespaces: 256 00:14:21.208 Compare Command: Supported 00:14:21.208 Write Uncorrectable Command: Not Supported 00:14:21.208 Dataset Management Command: Supported 00:14:21.208 Write Zeroes Command: Supported 00:14:21.208 Set Features Save Field: Supported 00:14:21.208 Reservations: Not Supported 00:14:21.208 Timestamp: Supported 00:14:21.208 Copy: Supported 00:14:21.208 Volatile Write Cache: Present 00:14:21.208 Atomic Write Unit (Normal): 1 00:14:21.208 Atomic Write Unit (PFail): 1 00:14:21.208 Atomic Compare & Write Unit: 1 00:14:21.208 Fused Compare & Write: Not Supported 00:14:21.208 Scatter-Gather List 00:14:21.208 SGL Command Set: Supported 00:14:21.208 SGL Keyed: Not Supported 00:14:21.208 SGL Bit Bucket Descriptor: Not Supported 00:14:21.208 SGL Metadata Pointer: Not Supported 00:14:21.209 Oversized SGL: Not Supported 00:14:21.209 SGL Metadata Address: Not Supported 00:14:21.209 SGL Offset: Not Supported 00:14:21.209 Transport SGL Data Block: Not Supported 00:14:21.209 Replay Protected Memory Block: Not Supported 00:14:21.209 00:14:21.209 Firmware Slot Information 00:14:21.209 ========================= 00:14:21.209 Active slot: 1 00:14:21.209 Slot 1 Firmware Revision: 1.0 00:14:21.209 00:14:21.209 00:14:21.209 Commands Supported and Effects 00:14:21.209 ============================== 00:14:21.209 Admin Commands 00:14:21.209 -------------- 00:14:21.209 Delete I/O Submission Queue (00h): Supported 00:14:21.209 Create I/O Submission Queue (01h): Supported 00:14:21.209 Get Log Page (02h): Supported 00:14:21.209 Delete I/O Completion Queue (04h): Supported 00:14:21.209 Create I/O Completion Queue (05h): Supported 00:14:21.209 Identify (06h): Supported 00:14:21.209 Abort (08h): Supported 00:14:21.209 Set Features (09h): Supported 00:14:21.209 Get Features (0Ah): Supported 00:14:21.209 Asynchronous Event Request (0Ch): Supported 00:14:21.209 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:21.209 Directive Send (19h): Supported 00:14:21.209 Directive Receive (1Ah): Supported 00:14:21.209 Virtualization Management (1Ch): Supported 00:14:21.209 Doorbell Buffer Config (7Ch): Supported 00:14:21.209 Format NVM (80h): Supported LBA-Change 00:14:21.209 I/O Commands 00:14:21.209 ------------ 00:14:21.209 Flush (00h): Supported LBA-Change 00:14:21.209 Write (01h): Supported LBA-Change 00:14:21.209 Read (02h): Supported 00:14:21.209 Compare (05h): Supported 00:14:21.209 Write Zeroes (08h): Supported LBA-Change 00:14:21.209 
Dataset Management (09h): Supported LBA-Change 00:14:21.209 Unknown (0Ch): Supported 00:14:21.209 Unknown (12h): Supported 00:14:21.209 Copy (19h): Supported LBA-Change 00:14:21.209 Unknown (1Dh): Supported LBA-Change 00:14:21.209 00:14:21.209 Error Log 00:14:21.209 ========= 00:14:21.209 00:14:21.209 Arbitration 00:14:21.209 =========== 00:14:21.209 Arbitration Burst: no limit 00:14:21.209 00:14:21.209 Power Management 00:14:21.209 ================ 00:14:21.209 Number of Power States: 1 00:14:21.209 Current Power State: Power State #0 00:14:21.209 Power State #0: 00:14:21.209 Max Power: 25.00 W 00:14:21.209 Non-Operational State: Operational 00:14:21.209 Entry Latency: 16 microseconds 00:14:21.209 Exit Latency: 4 microseconds 00:14:21.209 Relative Read Throughput: 0 00:14:21.209 Relative Read Latency: 0 00:14:21.209 Relative Write Throughput: 0 00:14:21.209 Relative Write Latency: 0 00:14:21.209 Idle Power: Not Reported 00:14:21.209 Active Power: Not Reported 00:14:21.209 Non-Operational Permissive Mode: Not Supported 00:14:21.209 00:14:21.209 Health Information 00:14:21.209 ================== 00:14:21.209 Critical Warnings: 00:14:21.209 Available Spare Space: OK 00:14:21.209 Temperature: OK 00:14:21.209 Device Reliability: OK 00:14:21.209 Read Only: No 00:14:21.209 Volatile Memory Backup: OK 00:14:21.209 Current Temperature: 323 Kelvin (50 Celsius) 00:14:21.209 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:21.209 Available Spare: 0% 00:14:21.209 Available Spare Threshold: 0% 00:14:21.209 Life Percentage Used: 0% 00:14:21.209 Data Units Read: 1073 00:14:21.209 Data Units Written: 946 00:14:21.209 Host Read Commands: 47987 00:14:21.209 Host Write Commands: 46879 00:14:21.209 Controller Busy Time: 0 minutes 00:14:21.209 Power Cycles: 0 00:14:21.209 Power On Hours: 0 hours 00:14:21.209 Unsafe Shutdowns: 0 00:14:21.209 Unrecoverable Media Errors: 0 00:14:21.209 Lifetime Error Log Entries: 0 00:14:21.209 Warning Temperature Time: 0 minutes 00:14:21.209 Critical Temperature Time: 0 minutes 00:14:21.209 00:14:21.209 Number of Queues 00:14:21.209 ================ 00:14:21.209 Number of I/O Submission Queues: 64 00:14:21.209 Number of I/O Completion Queues: 64 00:14:21.209 00:14:21.209 ZNS Specific Controller Data 00:14:21.209 ============================ 00:14:21.209 Zone Append Size Limit: 0 00:14:21.209 00:14:21.209 00:14:21.209 Active Namespaces 00:14:21.209 ================= 00:14:21.209 Namespace ID:1 00:14:21.209 Error Recovery Timeout: Unlimited 00:14:21.209 Command Set Identifier: NVM (00h) 00:14:21.209 Deallocate: Supported 00:14:21.209 Deallocated/Unwritten Error: Supported 00:14:21.209 Deallocated Read Value: All 0x00 00:14:21.209 Deallocate in Write Zeroes: Not Supported 00:14:21.209 Deallocated Guard Field: 0xFFFF 00:14:21.209 Flush: Supported 00:14:21.209 Reservation: Not Supported 00:14:21.209 Namespace Sharing Capabilities: Private 00:14:21.209 Size (in LBAs): 1310720 (5GiB) 00:14:21.209 Capacity (in LBAs): 1310720 (5GiB) 00:14:21.209 Utilization (in LBAs): 1310720 (5GiB) 00:14:21.209 Thin Provisioning: Not Supported 00:14:21.209 Per-NS Atomic Units: No 00:14:21.209 Maximum Single Source Range Length: 128 00:14:21.209 Maximum Copy Length: 128 00:14:21.209 Maximum Source Range Count: 128 00:14:21.209 NGUID/EUI64 Never Reused: No 00:14:21.209 Namespace Write Protected: No 00:14:21.209 Number of LBA Formats: 8 00:14:21.209 Current LBA Format: LBA Format #04 00:14:21.209 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:21.209 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:14:21.209 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:21.209 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:21.209 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:21.209 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:21.209 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:21.209 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:21.209 00:14:21.209 NVM Specific Namespace Data 00:14:21.209 =========================== 00:14:21.209 Logical Block Storage Tag Mask: 0 00:14:21.209 Protection Information Capabilities: 00:14:21.209 16b Guard Protection Information Storage Tag Support: No 00:14:21.209 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:21.209 Storage Tag Check Read Support: No 00:14:21.209 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.209 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.209 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.209 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.209 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.209 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.209 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.209 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.209 15:26:07 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:21.209 15:26:07 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:14:21.779 ===================================================== 00:14:21.779 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:21.779 ===================================================== 00:14:21.779 Controller Capabilities/Features 00:14:21.779 ================================ 00:14:21.779 Vendor ID: 1b36 00:14:21.779 Subsystem Vendor ID: 1af4 00:14:21.779 Serial Number: 12342 00:14:21.779 Model Number: QEMU NVMe Ctrl 00:14:21.779 Firmware Version: 8.0.0 00:14:21.779 Recommended Arb Burst: 6 00:14:21.779 IEEE OUI Identifier: 00 54 52 00:14:21.779 Multi-path I/O 00:14:21.779 May have multiple subsystem ports: No 00:14:21.779 May have multiple controllers: No 00:14:21.779 Associated with SR-IOV VF: No 00:14:21.779 Max Data Transfer Size: 524288 00:14:21.779 Max Number of Namespaces: 256 00:14:21.779 Max Number of I/O Queues: 64 00:14:21.779 NVMe Specification Version (VS): 1.4 00:14:21.779 NVMe Specification Version (Identify): 1.4 00:14:21.779 Maximum Queue Entries: 2048 00:14:21.779 Contiguous Queues Required: Yes 00:14:21.779 Arbitration Mechanisms Supported 00:14:21.779 Weighted Round Robin: Not Supported 00:14:21.779 Vendor Specific: Not Supported 00:14:21.779 Reset Timeout: 7500 ms 00:14:21.779 Doorbell Stride: 4 bytes 00:14:21.779 NVM Subsystem Reset: Not Supported 00:14:21.779 Command Sets Supported 00:14:21.779 NVM Command Set: Supported 00:14:21.779 Boot Partition: Not Supported 00:14:21.779 Memory Page Size Minimum: 4096 bytes 00:14:21.779 Memory Page Size Maximum: 65536 bytes 00:14:21.779 Persistent Memory Region: Not Supported 00:14:21.779 Optional Asynchronous Events Supported 00:14:21.779 Namespace Attribute Notices: Supported 00:14:21.779 Firmware 
Activation Notices: Not Supported 00:14:21.779 ANA Change Notices: Not Supported 00:14:21.779 PLE Aggregate Log Change Notices: Not Supported 00:14:21.779 LBA Status Info Alert Notices: Not Supported 00:14:21.779 EGE Aggregate Log Change Notices: Not Supported 00:14:21.779 Normal NVM Subsystem Shutdown event: Not Supported 00:14:21.779 Zone Descriptor Change Notices: Not Supported 00:14:21.779 Discovery Log Change Notices: Not Supported 00:14:21.779 Controller Attributes 00:14:21.779 128-bit Host Identifier: Not Supported 00:14:21.779 Non-Operational Permissive Mode: Not Supported 00:14:21.779 NVM Sets: Not Supported 00:14:21.779 Read Recovery Levels: Not Supported 00:14:21.779 Endurance Groups: Not Supported 00:14:21.779 Predictable Latency Mode: Not Supported 00:14:21.779 Traffic Based Keep ALive: Not Supported 00:14:21.780 Namespace Granularity: Not Supported 00:14:21.780 SQ Associations: Not Supported 00:14:21.780 UUID List: Not Supported 00:14:21.780 Multi-Domain Subsystem: Not Supported 00:14:21.780 Fixed Capacity Management: Not Supported 00:14:21.780 Variable Capacity Management: Not Supported 00:14:21.780 Delete Endurance Group: Not Supported 00:14:21.780 Delete NVM Set: Not Supported 00:14:21.780 Extended LBA Formats Supported: Supported 00:14:21.780 Flexible Data Placement Supported: Not Supported 00:14:21.780 00:14:21.780 Controller Memory Buffer Support 00:14:21.780 ================================ 00:14:21.780 Supported: No 00:14:21.780 00:14:21.780 Persistent Memory Region Support 00:14:21.780 ================================ 00:14:21.780 Supported: No 00:14:21.780 00:14:21.780 Admin Command Set Attributes 00:14:21.780 ============================ 00:14:21.780 Security Send/Receive: Not Supported 00:14:21.780 Format NVM: Supported 00:14:21.780 Firmware Activate/Download: Not Supported 00:14:21.780 Namespace Management: Supported 00:14:21.780 Device Self-Test: Not Supported 00:14:21.780 Directives: Supported 00:14:21.780 NVMe-MI: Not Supported 00:14:21.780 Virtualization Management: Not Supported 00:14:21.780 Doorbell Buffer Config: Supported 00:14:21.780 Get LBA Status Capability: Not Supported 00:14:21.780 Command & Feature Lockdown Capability: Not Supported 00:14:21.780 Abort Command Limit: 4 00:14:21.780 Async Event Request Limit: 4 00:14:21.780 Number of Firmware Slots: N/A 00:14:21.780 Firmware Slot 1 Read-Only: N/A 00:14:21.780 Firmware Activation Without Reset: N/A 00:14:21.780 Multiple Update Detection Support: N/A 00:14:21.780 Firmware Update Granularity: No Information Provided 00:14:21.780 Per-Namespace SMART Log: Yes 00:14:21.780 Asymmetric Namespace Access Log Page: Not Supported 00:14:21.780 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:14:21.780 Command Effects Log Page: Supported 00:14:21.780 Get Log Page Extended Data: Supported 00:14:21.780 Telemetry Log Pages: Not Supported 00:14:21.780 Persistent Event Log Pages: Not Supported 00:14:21.780 Supported Log Pages Log Page: May Support 00:14:21.780 Commands Supported & Effects Log Page: Not Supported 00:14:21.780 Feature Identifiers & Effects Log Page:May Support 00:14:21.780 NVMe-MI Commands & Effects Log Page: May Support 00:14:21.780 Data Area 4 for Telemetry Log: Not Supported 00:14:21.780 Error Log Page Entries Supported: 1 00:14:21.780 Keep Alive: Not Supported 00:14:21.780 00:14:21.780 NVM Command Set Attributes 00:14:21.780 ========================== 00:14:21.780 Submission Queue Entry Size 00:14:21.780 Max: 64 00:14:21.780 Min: 64 00:14:21.780 Completion Queue Entry Size 00:14:21.780 Max: 16 
00:14:21.780 Min: 16 00:14:21.780 Number of Namespaces: 256 00:14:21.780 Compare Command: Supported 00:14:21.780 Write Uncorrectable Command: Not Supported 00:14:21.780 Dataset Management Command: Supported 00:14:21.780 Write Zeroes Command: Supported 00:14:21.780 Set Features Save Field: Supported 00:14:21.780 Reservations: Not Supported 00:14:21.780 Timestamp: Supported 00:14:21.780 Copy: Supported 00:14:21.780 Volatile Write Cache: Present 00:14:21.780 Atomic Write Unit (Normal): 1 00:14:21.780 Atomic Write Unit (PFail): 1 00:14:21.780 Atomic Compare & Write Unit: 1 00:14:21.780 Fused Compare & Write: Not Supported 00:14:21.780 Scatter-Gather List 00:14:21.780 SGL Command Set: Supported 00:14:21.780 SGL Keyed: Not Supported 00:14:21.780 SGL Bit Bucket Descriptor: Not Supported 00:14:21.780 SGL Metadata Pointer: Not Supported 00:14:21.780 Oversized SGL: Not Supported 00:14:21.780 SGL Metadata Address: Not Supported 00:14:21.780 SGL Offset: Not Supported 00:14:21.780 Transport SGL Data Block: Not Supported 00:14:21.780 Replay Protected Memory Block: Not Supported 00:14:21.780 00:14:21.780 Firmware Slot Information 00:14:21.780 ========================= 00:14:21.780 Active slot: 1 00:14:21.780 Slot 1 Firmware Revision: 1.0 00:14:21.780 00:14:21.780 00:14:21.780 Commands Supported and Effects 00:14:21.780 ============================== 00:14:21.780 Admin Commands 00:14:21.780 -------------- 00:14:21.780 Delete I/O Submission Queue (00h): Supported 00:14:21.780 Create I/O Submission Queue (01h): Supported 00:14:21.780 Get Log Page (02h): Supported 00:14:21.780 Delete I/O Completion Queue (04h): Supported 00:14:21.780 Create I/O Completion Queue (05h): Supported 00:14:21.780 Identify (06h): Supported 00:14:21.780 Abort (08h): Supported 00:14:21.780 Set Features (09h): Supported 00:14:21.780 Get Features (0Ah): Supported 00:14:21.780 Asynchronous Event Request (0Ch): Supported 00:14:21.780 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:21.780 Directive Send (19h): Supported 00:14:21.780 Directive Receive (1Ah): Supported 00:14:21.780 Virtualization Management (1Ch): Supported 00:14:21.780 Doorbell Buffer Config (7Ch): Supported 00:14:21.780 Format NVM (80h): Supported LBA-Change 00:14:21.780 I/O Commands 00:14:21.780 ------------ 00:14:21.780 Flush (00h): Supported LBA-Change 00:14:21.780 Write (01h): Supported LBA-Change 00:14:21.780 Read (02h): Supported 00:14:21.780 Compare (05h): Supported 00:14:21.780 Write Zeroes (08h): Supported LBA-Change 00:14:21.780 Dataset Management (09h): Supported LBA-Change 00:14:21.780 Unknown (0Ch): Supported 00:14:21.780 Unknown (12h): Supported 00:14:21.780 Copy (19h): Supported LBA-Change 00:14:21.780 Unknown (1Dh): Supported LBA-Change 00:14:21.780 00:14:21.780 Error Log 00:14:21.780 ========= 00:14:21.780 00:14:21.780 Arbitration 00:14:21.780 =========== 00:14:21.780 Arbitration Burst: no limit 00:14:21.780 00:14:21.780 Power Management 00:14:21.780 ================ 00:14:21.780 Number of Power States: 1 00:14:21.780 Current Power State: Power State #0 00:14:21.780 Power State #0: 00:14:21.780 Max Power: 25.00 W 00:14:21.780 Non-Operational State: Operational 00:14:21.780 Entry Latency: 16 microseconds 00:14:21.780 Exit Latency: 4 microseconds 00:14:21.780 Relative Read Throughput: 0 00:14:21.780 Relative Read Latency: 0 00:14:21.780 Relative Write Throughput: 0 00:14:21.780 Relative Write Latency: 0 00:14:21.780 Idle Power: Not Reported 00:14:21.780 Active Power: Not Reported 00:14:21.780 Non-Operational Permissive Mode: Not Supported 
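The parenthetical figures repeated through these dumps are simple derivations from the raw fields: the "(4GiB)"/"(5GiB)" capacity labels are the LBA count times the 4096-byte data size of the current LBA format, and the Celsius values are the Kelvin readings minus 273. A quick sanity check of the numbers quoted above, shell arithmetic only and not part of the captured run:

    $ echo $(( 1048576 * 4096 ))   # 12342 namespaces: 2^20 LBAs x 4 KiB = 4 GiB
    4294967296
    $ echo $(( 1310720 * 4096 ))   # 12341 namespace: 1310720 LBAs x 4 KiB = 5 GiB
    5368709120
    $ echo $(( 323 - 273 ))        # 323 Kelvin reported as 50 Celsius
    50
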
00:14:21.780 00:14:21.780 Health Information 00:14:21.780 ================== 00:14:21.780 Critical Warnings: 00:14:21.780 Available Spare Space: OK 00:14:21.780 Temperature: OK 00:14:21.780 Device Reliability: OK 00:14:21.780 Read Only: No 00:14:21.780 Volatile Memory Backup: OK 00:14:21.780 Current Temperature: 323 Kelvin (50 Celsius) 00:14:21.780 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:21.780 Available Spare: 0% 00:14:21.780 Available Spare Threshold: 0% 00:14:21.780 Life Percentage Used: 0% 00:14:21.780 Data Units Read: 2246 00:14:21.780 Data Units Written: 2033 00:14:21.780 Host Read Commands: 98607 00:14:21.780 Host Write Commands: 96878 00:14:21.780 Controller Busy Time: 0 minutes 00:14:21.780 Power Cycles: 0 00:14:21.780 Power On Hours: 0 hours 00:14:21.780 Unsafe Shutdowns: 0 00:14:21.780 Unrecoverable Media Errors: 0 00:14:21.780 Lifetime Error Log Entries: 0 00:14:21.780 Warning Temperature Time: 0 minutes 00:14:21.780 Critical Temperature Time: 0 minutes 00:14:21.780 00:14:21.780 Number of Queues 00:14:21.780 ================ 00:14:21.780 Number of I/O Submission Queues: 64 00:14:21.780 Number of I/O Completion Queues: 64 00:14:21.780 00:14:21.780 ZNS Specific Controller Data 00:14:21.780 ============================ 00:14:21.780 Zone Append Size Limit: 0 00:14:21.780 00:14:21.780 00:14:21.780 Active Namespaces 00:14:21.780 ================= 00:14:21.780 Namespace ID:1 00:14:21.780 Error Recovery Timeout: Unlimited 00:14:21.780 Command Set Identifier: NVM (00h) 00:14:21.780 Deallocate: Supported 00:14:21.780 Deallocated/Unwritten Error: Supported 00:14:21.780 Deallocated Read Value: All 0x00 00:14:21.780 Deallocate in Write Zeroes: Not Supported 00:14:21.780 Deallocated Guard Field: 0xFFFF 00:14:21.780 Flush: Supported 00:14:21.780 Reservation: Not Supported 00:14:21.780 Namespace Sharing Capabilities: Private 00:14:21.780 Size (in LBAs): 1048576 (4GiB) 00:14:21.780 Capacity (in LBAs): 1048576 (4GiB) 00:14:21.780 Utilization (in LBAs): 1048576 (4GiB) 00:14:21.780 Thin Provisioning: Not Supported 00:14:21.780 Per-NS Atomic Units: No 00:14:21.780 Maximum Single Source Range Length: 128 00:14:21.780 Maximum Copy Length: 128 00:14:21.781 Maximum Source Range Count: 128 00:14:21.781 NGUID/EUI64 Never Reused: No 00:14:21.781 Namespace Write Protected: No 00:14:21.781 Number of LBA Formats: 8 00:14:21.781 Current LBA Format: LBA Format #04 00:14:21.781 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:21.781 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:21.781 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:21.781 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:21.781 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:21.781 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:21.781 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:21.781 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:21.781 00:14:21.781 NVM Specific Namespace Data 00:14:21.781 =========================== 00:14:21.781 Logical Block Storage Tag Mask: 0 00:14:21.781 Protection Information Capabilities: 00:14:21.781 16b Guard Protection Information Storage Tag Support: No 00:14:21.781 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:21.781 Storage Tag Check Read Support: No 00:14:21.781 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Namespace ID:2 00:14:21.781 Error Recovery Timeout: Unlimited 00:14:21.781 Command Set Identifier: NVM (00h) 00:14:21.781 Deallocate: Supported 00:14:21.781 Deallocated/Unwritten Error: Supported 00:14:21.781 Deallocated Read Value: All 0x00 00:14:21.781 Deallocate in Write Zeroes: Not Supported 00:14:21.781 Deallocated Guard Field: 0xFFFF 00:14:21.781 Flush: Supported 00:14:21.781 Reservation: Not Supported 00:14:21.781 Namespace Sharing Capabilities: Private 00:14:21.781 Size (in LBAs): 1048576 (4GiB) 00:14:21.781 Capacity (in LBAs): 1048576 (4GiB) 00:14:21.781 Utilization (in LBAs): 1048576 (4GiB) 00:14:21.781 Thin Provisioning: Not Supported 00:14:21.781 Per-NS Atomic Units: No 00:14:21.781 Maximum Single Source Range Length: 128 00:14:21.781 Maximum Copy Length: 128 00:14:21.781 Maximum Source Range Count: 128 00:14:21.781 NGUID/EUI64 Never Reused: No 00:14:21.781 Namespace Write Protected: No 00:14:21.781 Number of LBA Formats: 8 00:14:21.781 Current LBA Format: LBA Format #04 00:14:21.781 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:21.781 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:21.781 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:21.781 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:21.781 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:21.781 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:21.781 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:21.781 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:21.781 00:14:21.781 NVM Specific Namespace Data 00:14:21.781 =========================== 00:14:21.781 Logical Block Storage Tag Mask: 0 00:14:21.781 Protection Information Capabilities: 00:14:21.781 16b Guard Protection Information Storage Tag Support: No 00:14:21.781 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:21.781 Storage Tag Check Read Support: No 00:14:21.781 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Namespace ID:3 00:14:21.781 Error Recovery Timeout: Unlimited 00:14:21.781 Command Set Identifier: NVM (00h) 00:14:21.781 Deallocate: Supported 00:14:21.781 Deallocated/Unwritten Error: Supported 00:14:21.781 Deallocated Read 
Value: All 0x00 00:14:21.781 Deallocate in Write Zeroes: Not Supported 00:14:21.781 Deallocated Guard Field: 0xFFFF 00:14:21.781 Flush: Supported 00:14:21.781 Reservation: Not Supported 00:14:21.781 Namespace Sharing Capabilities: Private 00:14:21.781 Size (in LBAs): 1048576 (4GiB) 00:14:21.781 Capacity (in LBAs): 1048576 (4GiB) 00:14:21.781 Utilization (in LBAs): 1048576 (4GiB) 00:14:21.781 Thin Provisioning: Not Supported 00:14:21.781 Per-NS Atomic Units: No 00:14:21.781 Maximum Single Source Range Length: 128 00:14:21.781 Maximum Copy Length: 128 00:14:21.781 Maximum Source Range Count: 128 00:14:21.781 NGUID/EUI64 Never Reused: No 00:14:21.781 Namespace Write Protected: No 00:14:21.781 Number of LBA Formats: 8 00:14:21.781 Current LBA Format: LBA Format #04 00:14:21.781 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:21.781 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:21.781 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:21.781 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:21.781 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:21.781 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:21.781 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:21.781 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:21.781 00:14:21.781 NVM Specific Namespace Data 00:14:21.781 =========================== 00:14:21.781 Logical Block Storage Tag Mask: 0 00:14:21.781 Protection Information Capabilities: 00:14:21.781 16b Guard Protection Information Storage Tag Support: No 00:14:21.781 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:21.781 Storage Tag Check Read Support: No 00:14:21.781 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:21.781 15:26:07 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:21.781 15:26:07 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:14:22.040 ===================================================== 00:14:22.041 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:22.041 ===================================================== 00:14:22.041 Controller Capabilities/Features 00:14:22.041 ================================ 00:14:22.041 Vendor ID: 1b36 00:14:22.041 Subsystem Vendor ID: 1af4 00:14:22.041 Serial Number: 12343 00:14:22.041 Model Number: QEMU NVMe Ctrl 00:14:22.041 Firmware Version: 8.0.0 00:14:22.041 Recommended Arb Burst: 6 00:14:22.041 IEEE OUI Identifier: 00 54 52 00:14:22.041 Multi-path I/O 00:14:22.041 May have multiple subsystem ports: No 00:14:22.041 May have multiple controllers: Yes 00:14:22.041 Associated with SR-IOV VF: No 00:14:22.041 Max Data Transfer Size: 524288 00:14:22.041 Max Number of Namespaces: 
256 00:14:22.041 Max Number of I/O Queues: 64 00:14:22.041 NVMe Specification Version (VS): 1.4 00:14:22.041 NVMe Specification Version (Identify): 1.4 00:14:22.041 Maximum Queue Entries: 2048 00:14:22.041 Contiguous Queues Required: Yes 00:14:22.041 Arbitration Mechanisms Supported 00:14:22.041 Weighted Round Robin: Not Supported 00:14:22.041 Vendor Specific: Not Supported 00:14:22.041 Reset Timeout: 7500 ms 00:14:22.041 Doorbell Stride: 4 bytes 00:14:22.041 NVM Subsystem Reset: Not Supported 00:14:22.041 Command Sets Supported 00:14:22.041 NVM Command Set: Supported 00:14:22.041 Boot Partition: Not Supported 00:14:22.041 Memory Page Size Minimum: 4096 bytes 00:14:22.041 Memory Page Size Maximum: 65536 bytes 00:14:22.041 Persistent Memory Region: Not Supported 00:14:22.041 Optional Asynchronous Events Supported 00:14:22.041 Namespace Attribute Notices: Supported 00:14:22.041 Firmware Activation Notices: Not Supported 00:14:22.041 ANA Change Notices: Not Supported 00:14:22.041 PLE Aggregate Log Change Notices: Not Supported 00:14:22.041 LBA Status Info Alert Notices: Not Supported 00:14:22.041 EGE Aggregate Log Change Notices: Not Supported 00:14:22.041 Normal NVM Subsystem Shutdown event: Not Supported 00:14:22.041 Zone Descriptor Change Notices: Not Supported 00:14:22.041 Discovery Log Change Notices: Not Supported 00:14:22.041 Controller Attributes 00:14:22.041 128-bit Host Identifier: Not Supported 00:14:22.041 Non-Operational Permissive Mode: Not Supported 00:14:22.041 NVM Sets: Not Supported 00:14:22.041 Read Recovery Levels: Not Supported 00:14:22.041 Endurance Groups: Supported 00:14:22.041 Predictable Latency Mode: Not Supported 00:14:22.041 Traffic Based Keep Alive: Not Supported 00:14:22.041 Namespace Granularity: Not Supported 00:14:22.041 SQ Associations: Not Supported 00:14:22.041 UUID List: Not Supported 00:14:22.041 Multi-Domain Subsystem: Not Supported 00:14:22.041 Fixed Capacity Management: Not Supported 00:14:22.041 Variable Capacity Management: Not Supported 00:14:22.041 Delete Endurance Group: Not Supported 00:14:22.041 Delete NVM Set: Not Supported 00:14:22.041 Extended LBA Formats Supported: Supported 00:14:22.041 Flexible Data Placement Supported: Supported 00:14:22.041 00:14:22.041 Controller Memory Buffer Support 00:14:22.041 ================================ 00:14:22.041 Supported: No 00:14:22.041 00:14:22.041 Persistent Memory Region Support 00:14:22.041 ================================ 00:14:22.041 Supported: No 00:14:22.041 00:14:22.041 Admin Command Set Attributes 00:14:22.041 ============================ 00:14:22.041 Security Send/Receive: Not Supported 00:14:22.041 Format NVM: Supported 00:14:22.041 Firmware Activate/Download: Not Supported 00:14:22.041 Namespace Management: Supported 00:14:22.041 Device Self-Test: Not Supported 00:14:22.041 Directives: Supported 00:14:22.041 NVMe-MI: Not Supported 00:14:22.041 Virtualization Management: Not Supported 00:14:22.041 Doorbell Buffer Config: Supported 00:14:22.041 Get LBA Status Capability: Not Supported 00:14:22.041 Command & Feature Lockdown Capability: Not Supported 00:14:22.041 Abort Command Limit: 4 00:14:22.041 Async Event Request Limit: 4 00:14:22.041 Number of Firmware Slots: N/A 00:14:22.041 Firmware Slot 1 Read-Only: N/A 00:14:22.041 Firmware Activation Without Reset: N/A 00:14:22.041 Multiple Update Detection Support: N/A 00:14:22.041 Firmware Update Granularity: No Information Provided 00:14:22.041 Per-Namespace SMART Log: Yes 00:14:22.041 Asymmetric Namespace Access Log Page: Not Supported
00:14:22.041 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:14:22.041 Command Effects Log Page: Supported 00:14:22.041 Get Log Page Extended Data: Supported 00:14:22.041 Telemetry Log Pages: Not Supported 00:14:22.041 Persistent Event Log Pages: Not Supported 00:14:22.041 Supported Log Pages Log Page: May Support 00:14:22.041 Commands Supported & Effects Log Page: Not Supported 00:14:22.041 Feature Identifiers & Effects Log Page: May Support 00:14:22.041 NVMe-MI Commands & Effects Log Page: May Support 00:14:22.041 Data Area 4 for Telemetry Log: Not Supported 00:14:22.041 Error Log Page Entries Supported: 1 00:14:22.041 Keep Alive: Not Supported 00:14:22.041 00:14:22.041 NVM Command Set Attributes 00:14:22.041 ========================== 00:14:22.041 Submission Queue Entry Size 00:14:22.041 Max: 64 00:14:22.041 Min: 64 00:14:22.041 Completion Queue Entry Size 00:14:22.041 Max: 16 00:14:22.041 Min: 16 00:14:22.041 Number of Namespaces: 256 00:14:22.041 Compare Command: Supported 00:14:22.041 Write Uncorrectable Command: Not Supported 00:14:22.041 Dataset Management Command: Supported 00:14:22.041 Write Zeroes Command: Supported 00:14:22.041 Set Features Save Field: Supported 00:14:22.041 Reservations: Not Supported 00:14:22.041 Timestamp: Supported 00:14:22.041 Copy: Supported 00:14:22.041 Volatile Write Cache: Present 00:14:22.041 Atomic Write Unit (Normal): 1 00:14:22.041 Atomic Write Unit (PFail): 1 00:14:22.041 Atomic Compare & Write Unit: 1 00:14:22.041 Fused Compare & Write: Not Supported 00:14:22.041 Scatter-Gather List 00:14:22.041 SGL Command Set: Supported 00:14:22.041 SGL Keyed: Not Supported 00:14:22.041 SGL Bit Bucket Descriptor: Not Supported 00:14:22.041 SGL Metadata Pointer: Not Supported 00:14:22.041 Oversized SGL: Not Supported 00:14:22.041 SGL Metadata Address: Not Supported 00:14:22.041 SGL Offset: Not Supported 00:14:22.041 Transport SGL Data Block: Not Supported 00:14:22.041 Replay Protected Memory Block: Not Supported 00:14:22.041 00:14:22.041 Firmware Slot Information 00:14:22.041 ========================= 00:14:22.041 Active slot: 1 00:14:22.041 Slot 1 Firmware Revision: 1.0 00:14:22.041 00:14:22.041 00:14:22.041 Commands Supported and Effects 00:14:22.041 ============================== 00:14:22.041 Admin Commands 00:14:22.041 -------------- 00:14:22.041 Delete I/O Submission Queue (00h): Supported 00:14:22.041 Create I/O Submission Queue (01h): Supported 00:14:22.041 Get Log Page (02h): Supported 00:14:22.041 Delete I/O Completion Queue (04h): Supported 00:14:22.041 Create I/O Completion Queue (05h): Supported 00:14:22.041 Identify (06h): Supported 00:14:22.041 Abort (08h): Supported 00:14:22.041 Set Features (09h): Supported 00:14:22.041 Get Features (0Ah): Supported 00:14:22.041 Asynchronous Event Request (0Ch): Supported 00:14:22.041 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:22.041 Directive Send (19h): Supported 00:14:22.041 Directive Receive (1Ah): Supported 00:14:22.041 Virtualization Management (1Ch): Supported 00:14:22.041 Doorbell Buffer Config (7Ch): Supported 00:14:22.041 Format NVM (80h): Supported LBA-Change 00:14:22.041 I/O Commands 00:14:22.041 ------------ 00:14:22.041 Flush (00h): Supported LBA-Change 00:14:22.041 Write (01h): Supported LBA-Change 00:14:22.041 Read (02h): Supported 00:14:22.041 Compare (05h): Supported 00:14:22.041 Write Zeroes (08h): Supported LBA-Change 00:14:22.041 Dataset Management (09h): Supported LBA-Change 00:14:22.041 Unknown (0Ch): Supported 00:14:22.041 Unknown (12h): Supported 00:14:22.041 Copy
(19h): Supported LBA-Change 00:14:22.041 Unknown (1Dh): Supported LBA-Change 00:14:22.041 00:14:22.041 Error Log 00:14:22.041 ========= 00:14:22.041 00:14:22.041 Arbitration 00:14:22.041 =========== 00:14:22.041 Arbitration Burst: no limit 00:14:22.041 00:14:22.041 Power Management 00:14:22.041 ================ 00:14:22.041 Number of Power States: 1 00:14:22.041 Current Power State: Power State #0 00:14:22.041 Power State #0: 00:14:22.041 Max Power: 25.00 W 00:14:22.041 Non-Operational State: Operational 00:14:22.041 Entry Latency: 16 microseconds 00:14:22.041 Exit Latency: 4 microseconds 00:14:22.041 Relative Read Throughput: 0 00:14:22.041 Relative Read Latency: 0 00:14:22.041 Relative Write Throughput: 0 00:14:22.041 Relative Write Latency: 0 00:14:22.042 Idle Power: Not Reported 00:14:22.042 Active Power: Not Reported 00:14:22.042 Non-Operational Permissive Mode: Not Supported 00:14:22.042 00:14:22.042 Health Information 00:14:22.042 ================== 00:14:22.042 Critical Warnings: 00:14:22.042 Available Spare Space: OK 00:14:22.042 Temperature: OK 00:14:22.042 Device Reliability: OK 00:14:22.042 Read Only: No 00:14:22.042 Volatile Memory Backup: OK 00:14:22.042 Current Temperature: 323 Kelvin (50 Celsius) 00:14:22.042 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:22.042 Available Spare: 0% 00:14:22.042 Available Spare Threshold: 0% 00:14:22.042 Life Percentage Used: 0% 00:14:22.042 Data Units Read: 811 00:14:22.042 Data Units Written: 740 00:14:22.042 Host Read Commands: 33471 00:14:22.042 Host Write Commands: 32894 00:14:22.042 Controller Busy Time: 0 minutes 00:14:22.042 Power Cycles: 0 00:14:22.042 Power On Hours: 0 hours 00:14:22.042 Unsafe Shutdowns: 0 00:14:22.042 Unrecoverable Media Errors: 0 00:14:22.042 Lifetime Error Log Entries: 0 00:14:22.042 Warning Temperature Time: 0 minutes 00:14:22.042 Critical Temperature Time: 0 minutes 00:14:22.042 00:14:22.042 Number of Queues 00:14:22.042 ================ 00:14:22.042 Number of I/O Submission Queues: 64 00:14:22.042 Number of I/O Completion Queues: 64 00:14:22.042 00:14:22.042 ZNS Specific Controller Data 00:14:22.042 ============================ 00:14:22.042 Zone Append Size Limit: 0 00:14:22.042 00:14:22.042 00:14:22.042 Active Namespaces 00:14:22.042 ================= 00:14:22.042 Namespace ID:1 00:14:22.042 Error Recovery Timeout: Unlimited 00:14:22.042 Command Set Identifier: NVM (00h) 00:14:22.042 Deallocate: Supported 00:14:22.042 Deallocated/Unwritten Error: Supported 00:14:22.042 Deallocated Read Value: All 0x00 00:14:22.042 Deallocate in Write Zeroes: Not Supported 00:14:22.042 Deallocated Guard Field: 0xFFFF 00:14:22.042 Flush: Supported 00:14:22.042 Reservation: Not Supported 00:14:22.042 Namespace Sharing Capabilities: Multiple Controllers 00:14:22.042 Size (in LBAs): 262144 (1GiB) 00:14:22.042 Capacity (in LBAs): 262144 (1GiB) 00:14:22.042 Utilization (in LBAs): 262144 (1GiB) 00:14:22.042 Thin Provisioning: Not Supported 00:14:22.042 Per-NS Atomic Units: No 00:14:22.042 Maximum Single Source Range Length: 128 00:14:22.042 Maximum Copy Length: 128 00:14:22.042 Maximum Source Range Count: 128 00:14:22.042 NGUID/EUI64 Never Reused: No 00:14:22.042 Namespace Write Protected: No 00:14:22.042 Endurance group ID: 1 00:14:22.042 Number of LBA Formats: 8 00:14:22.042 Current LBA Format: LBA Format #04 00:14:22.042 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:22.042 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:22.042 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:22.042 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:14:22.042 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:22.042 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:22.042 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:22.042 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:22.042 00:14:22.042 Get Feature FDP: 00:14:22.042 ================ 00:14:22.042 Enabled: Yes 00:14:22.042 FDP configuration index: 0 00:14:22.042 00:14:22.042 FDP configurations log page 00:14:22.042 =========================== 00:14:22.042 Number of FDP configurations: 1 00:14:22.042 Version: 0 00:14:22.042 Size: 112 00:14:22.042 FDP Configuration Descriptor: 0 00:14:22.042 Descriptor Size: 96 00:14:22.042 Reclaim Group Identifier format: 2 00:14:22.042 FDP Volatile Write Cache: Not Present 00:14:22.042 FDP Configuration: Valid 00:14:22.042 Vendor Specific Size: 0 00:14:22.042 Number of Reclaim Groups: 2 00:14:22.042 Number of Reclaim Unit Handles: 8 00:14:22.042 Max Placement Identifiers: 128 00:14:22.042 Number of Namespaces Supported: 256 00:14:22.042 Reclaim Unit Nominal Size: 6000000 bytes 00:14:22.042 Estimated Reclaim Unit Time Limit: Not Reported 00:14:22.042 RUH Desc #000: RUH Type: Initially Isolated 00:14:22.042 RUH Desc #001: RUH Type: Initially Isolated 00:14:22.042 RUH Desc #002: RUH Type: Initially Isolated 00:14:22.042 RUH Desc #003: RUH Type: Initially Isolated 00:14:22.042 RUH Desc #004: RUH Type: Initially Isolated 00:14:22.042 RUH Desc #005: RUH Type: Initially Isolated 00:14:22.042 RUH Desc #006: RUH Type: Initially Isolated 00:14:22.042 RUH Desc #007: RUH Type: Initially Isolated 00:14:22.042 00:14:22.042 FDP reclaim unit handle usage log page 00:14:22.042 ====================================== 00:14:22.042 Number of Reclaim Unit Handles: 8 00:14:22.042 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:22.042 RUH Usage Desc #001: RUH Attributes: Unused 00:14:22.042 RUH Usage Desc #002: RUH Attributes: Unused 00:14:22.042 RUH Usage Desc #003: RUH Attributes: Unused 00:14:22.042 RUH Usage Desc #004: RUH Attributes: Unused 00:14:22.042 RUH Usage Desc #005: RUH Attributes: Unused 00:14:22.042 RUH Usage Desc #006: RUH Attributes: Unused 00:14:22.042 RUH Usage Desc #007: RUH Attributes: Unused 00:14:22.042 00:14:22.042 FDP statistics log page 00:14:22.042 ======================= 00:14:22.042 Host bytes with metadata written: 464494592 00:14:22.042 Media bytes with metadata written: 464547840 00:14:22.042 Media bytes erased: 0 00:14:22.042 00:14:22.042 FDP events log page 00:14:22.042 =================== 00:14:22.042 Number of FDP events: 0 00:14:22.042 00:14:22.042 NVM Specific Namespace Data 00:14:22.042 =========================== 00:14:22.042 Logical Block Storage Tag Mask: 0 00:14:22.042 Protection Information Capabilities: 00:14:22.042 16b Guard Protection Information Storage Tag Support: No 00:14:22.042 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:22.042 Storage Tag Check Read Support: No 00:14:22.042 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:22.042 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:22.042 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:22.042 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:22.042 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:22.042 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:22.042 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:22.042 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:22.042 ************************************ 00:14:22.042 END TEST nvme_identify 00:14:22.042 ************************************ 00:14:22.042 00:14:22.042 real 0m2.028s 00:14:22.042 user 0m0.740s 00:14:22.042 sys 0m1.050s 00:14:22.042 15:26:07 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.042 15:26:07 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:14:22.042 15:26:07 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:14:22.042 15:26:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:22.042 15:26:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.042 15:26:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:22.042 ************************************ 00:14:22.042 START TEST nvme_perf 00:14:22.042 ************************************ 00:14:22.042 15:26:07 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:14:22.042 15:26:07 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:14:23.430 Initializing NVMe Controllers 00:14:23.430 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:23.430 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:23.430 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:23.430 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:23.430 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:23.430 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:23.430 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:23.430 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:23.430 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:23.430 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:23.430 Initialization complete. Launching workers. 
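For reference, both SPDK tools exercised by this test can be invoked by hand with the exact flags that appear in the trace above. A minimal sketch; the /home/vagrant/spdk_repo path is this job's checkout and would need adjusting in any other environment:

    # Dump identify data for one controller, addressed by PCIe transport ID
    # exactly as in the trace above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

    # The latency run that produces the tables below: queue depth 128 (-q),
    # read workload (-w), 12288-byte I/Os (-o), 1 second (-t), with latency
    # tracking and the histogram detail shown below (-LL, as used here).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

As a consistency check on the summary that follows: with 12288-byte I/Os, MiB/s = IOPS × 12288 / 2^20, so 10334.53 IOPS ≈ 121.11 MiB/s for a single device and 62070.95 IOPS ≈ 727.39 MiB/s in total, matching the table.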
00:14:23.430 ======================================================== 00:14:23.430 Latency(us) 00:14:23.430 Device Information : IOPS MiB/s Average min max 00:14:23.430 PCIE (0000:00:10.0) NSID 1 from core 0: 10334.53 121.11 12410.29 8779.97 59684.48 00:14:23.430 PCIE (0000:00:11.0) NSID 1 from core 0: 10334.53 121.11 12372.87 8902.08 56963.84 00:14:23.430 PCIE (0000:00:13.0) NSID 1 from core 0: 10334.53 121.11 12329.92 8886.57 55009.19 00:14:23.430 PCIE (0000:00:12.0) NSID 1 from core 0: 10334.53 121.11 12292.51 8871.46 51604.92 00:14:23.430 PCIE (0000:00:12.0) NSID 2 from core 0: 10334.53 121.11 12254.31 8896.88 48388.30 00:14:23.430 PCIE (0000:00:12.0) NSID 3 from core 0: 10398.32 121.86 12140.72 8867.74 36564.73 00:14:23.430 ======================================================== 00:14:23.430 Total : 62070.95 727.39 12299.94 8779.97 59684.48 00:14:23.430 00:14:23.430 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:14:23.430 ================================================================================= 00:14:23.430 1.00000% : 9112.625us 00:14:23.430 10.00000% : 10236.099us 00:14:23.430 25.00000% : 10860.251us 00:14:23.430 50.00000% : 11546.819us 00:14:23.430 75.00000% : 12670.293us 00:14:23.430 90.00000% : 14979.657us 00:14:23.430 95.00000% : 16352.792us 00:14:23.430 98.00000% : 17601.097us 00:14:23.430 99.00000% : 45438.293us 00:14:23.430 99.50000% : 57671.680us 00:14:23.430 99.90000% : 59419.307us 00:14:23.430 99.99000% : 59668.968us 00:14:23.430 99.99900% : 59918.629us 00:14:23.430 99.99990% : 59918.629us 00:14:23.430 99.99999% : 59918.629us 00:14:23.430 00:14:23.430 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:23.430 ================================================================================= 00:14:23.430 1.00000% : 9237.455us 00:14:23.430 10.00000% : 10298.514us 00:14:23.430 25.00000% : 10860.251us 00:14:23.430 50.00000% : 11484.404us 00:14:23.430 75.00000% : 12607.878us 00:14:23.430 90.00000% : 15042.072us 00:14:23.430 95.00000% : 16352.792us 00:14:23.430 98.00000% : 17476.267us 00:14:23.430 99.00000% : 41943.040us 00:14:23.430 99.50000% : 54925.410us 00:14:23.430 99.90000% : 56673.036us 00:14:23.430 99.99000% : 56922.697us 00:14:23.430 99.99900% : 57172.358us 00:14:23.430 99.99990% : 57172.358us 00:14:23.430 99.99999% : 57172.358us 00:14:23.430 00:14:23.430 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:23.430 ================================================================================= 00:14:23.430 1.00000% : 9237.455us 00:14:23.430 10.00000% : 10236.099us 00:14:23.430 25.00000% : 10860.251us 00:14:23.430 50.00000% : 11484.404us 00:14:23.430 75.00000% : 12545.463us 00:14:23.430 90.00000% : 15042.072us 00:14:23.430 95.00000% : 16227.962us 00:14:23.430 98.00000% : 17101.775us 00:14:23.430 99.00000% : 40944.396us 00:14:23.430 99.50000% : 52928.122us 00:14:23.430 99.90000% : 54675.749us 00:14:23.430 99.99000% : 55175.070us 00:14:23.430 99.99900% : 55175.070us 00:14:23.430 99.99990% : 55175.070us 00:14:23.430 99.99999% : 55175.070us 00:14:23.430 00:14:23.430 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:23.430 ================================================================================= 00:14:23.430 1.00000% : 9237.455us 00:14:23.430 10.00000% : 10298.514us 00:14:23.430 25.00000% : 10922.667us 00:14:23.430 50.00000% : 11546.819us 00:14:23.430 75.00000% : 12607.878us 00:14:23.430 90.00000% : 14917.242us 00:14:23.430 95.00000% : 15853.470us 00:14:23.430 98.00000% : 
16976.945us 00:14:23.430 99.00000% : 38447.787us 00:14:23.430 99.50000% : 49682.530us 00:14:23.430 99.90000% : 51180.495us 00:14:23.430 99.99000% : 51679.817us 00:14:23.430 99.99900% : 51679.817us 00:14:23.430 99.99990% : 51679.817us 00:14:23.430 99.99999% : 51679.817us 00:14:23.430 00:14:23.430 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:23.430 ================================================================================= 00:14:23.430 1.00000% : 9237.455us 00:14:23.430 10.00000% : 10236.099us 00:14:23.430 25.00000% : 10922.667us 00:14:23.430 50.00000% : 11484.404us 00:14:23.430 75.00000% : 12607.878us 00:14:23.430 90.00000% : 14792.411us 00:14:23.430 95.00000% : 15915.886us 00:14:23.430 98.00000% : 16976.945us 00:14:23.430 99.00000% : 35701.516us 00:14:23.430 99.50000% : 46436.937us 00:14:23.430 99.90000% : 48184.564us 00:14:23.430 99.99000% : 48434.225us 00:14:23.430 99.99900% : 48434.225us 00:14:23.430 99.99990% : 48434.225us 00:14:23.430 99.99999% : 48434.225us 00:14:23.430 00:14:23.430 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:23.430 ================================================================================= 00:14:23.430 1.00000% : 9175.040us 00:14:23.430 10.00000% : 10236.099us 00:14:23.430 25.00000% : 10922.667us 00:14:23.430 50.00000% : 11546.819us 00:14:23.430 75.00000% : 12607.878us 00:14:23.430 90.00000% : 14854.827us 00:14:23.430 95.00000% : 16352.792us 00:14:23.430 98.00000% : 17725.928us 00:14:23.430 99.00000% : 22843.977us 00:14:23.430 99.50000% : 34453.211us 00:14:23.430 99.90000% : 36200.838us 00:14:23.430 99.99000% : 36700.160us 00:14:23.430 99.99900% : 36700.160us 00:14:23.430 99.99990% : 36700.160us 00:14:23.430 99.99999% : 36700.160us 00:14:23.431 00:14:23.431 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:14:23.431 ============================================================================== 00:14:23.431 Range in us Cumulative IO count 00:14:23.431 8738.133 - 8800.549: 0.0386% ( 4) 00:14:23.431 8800.549 - 8862.964: 0.1447% ( 11) 00:14:23.431 8862.964 - 8925.379: 0.3086% ( 17) 00:14:23.431 8925.379 - 8987.794: 0.4726% ( 17) 00:14:23.431 8987.794 - 9050.210: 0.7427% ( 28) 00:14:23.431 9050.210 - 9112.625: 1.0899% ( 36) 00:14:23.431 9112.625 - 9175.040: 1.4275% ( 35) 00:14:23.431 9175.040 - 9237.455: 1.8133% ( 40) 00:14:23.431 9237.455 - 9299.870: 2.2859% ( 49) 00:14:23.431 9299.870 - 9362.286: 2.7681% ( 50) 00:14:23.431 9362.286 - 9424.701: 3.2793% ( 53) 00:14:23.431 9424.701 - 9487.116: 3.7905% ( 53) 00:14:23.431 9487.116 - 9549.531: 4.3210% ( 55) 00:14:23.431 9549.531 - 9611.947: 4.8322% ( 53) 00:14:23.431 9611.947 - 9674.362: 5.3337% ( 52) 00:14:23.431 9674.362 - 9736.777: 5.8160% ( 50) 00:14:23.431 9736.777 - 9799.192: 6.3561% ( 56) 00:14:23.431 9799.192 - 9861.608: 6.9348% ( 60) 00:14:23.431 9861.608 - 9924.023: 7.5135% ( 60) 00:14:23.431 9924.023 - 9986.438: 8.0536% ( 56) 00:14:23.431 9986.438 - 10048.853: 8.5359% ( 50) 00:14:23.431 10048.853 - 10111.269: 9.1628% ( 65) 00:14:23.431 10111.269 - 10173.684: 9.6644% ( 52) 00:14:23.431 10173.684 - 10236.099: 10.2238% ( 58) 00:14:23.431 10236.099 - 10298.514: 10.9375% ( 74) 00:14:23.431 10298.514 - 10360.930: 11.9213% ( 102) 00:14:23.431 10360.930 - 10423.345: 13.3198% ( 145) 00:14:23.431 10423.345 - 10485.760: 14.7569% ( 149) 00:14:23.431 10485.760 - 10548.175: 16.4352% ( 174) 00:14:23.431 10548.175 - 10610.590: 18.1713% ( 180) 00:14:23.431 10610.590 - 10673.006: 20.0617% ( 196) 00:14:23.431 10673.006 - 10735.421: 22.2319% ( 225) 
00:14:23.431 10735.421 - 10797.836: 24.1705% ( 201) 00:14:23.431 10797.836 - 10860.251: 26.1767% ( 208) 00:14:23.431 10860.251 - 10922.667: 28.3468% ( 225) 00:14:23.431 10922.667 - 10985.082: 30.3530% ( 208) 00:14:23.431 10985.082 - 11047.497: 32.4363% ( 216) 00:14:23.431 11047.497 - 11109.912: 34.6933% ( 234) 00:14:23.431 11109.912 - 11172.328: 36.9695% ( 236) 00:14:23.431 11172.328 - 11234.743: 39.3133% ( 243) 00:14:23.431 11234.743 - 11297.158: 41.6570% ( 243) 00:14:23.431 11297.158 - 11359.573: 44.1647% ( 260) 00:14:23.431 11359.573 - 11421.989: 46.4988% ( 242) 00:14:23.431 11421.989 - 11484.404: 48.8040% ( 239) 00:14:23.431 11484.404 - 11546.819: 50.8873% ( 216) 00:14:23.431 11546.819 - 11609.234: 53.1539% ( 235) 00:14:23.431 11609.234 - 11671.650: 55.3434% ( 227) 00:14:23.431 11671.650 - 11734.065: 57.2627% ( 199) 00:14:23.431 11734.065 - 11796.480: 58.9988% ( 180) 00:14:23.431 11796.480 - 11858.895: 60.6385% ( 170) 00:14:23.431 11858.895 - 11921.310: 61.9695% ( 138) 00:14:23.431 11921.310 - 11983.726: 63.4163% ( 150) 00:14:23.431 11983.726 - 12046.141: 64.6798% ( 131) 00:14:23.431 12046.141 - 12108.556: 65.8083% ( 117) 00:14:23.431 12108.556 - 12170.971: 66.8596% ( 109) 00:14:23.431 12170.971 - 12233.387: 67.9880% ( 117) 00:14:23.431 12233.387 - 12295.802: 69.1069% ( 116) 00:14:23.431 12295.802 - 12358.217: 70.3897% ( 133) 00:14:23.431 12358.217 - 12420.632: 71.5953% ( 125) 00:14:23.431 12420.632 - 12483.048: 72.7334% ( 118) 00:14:23.431 12483.048 - 12545.463: 73.8715% ( 118) 00:14:23.431 12545.463 - 12607.878: 74.9518% ( 112) 00:14:23.431 12607.878 - 12670.293: 76.0224% ( 111) 00:14:23.431 12670.293 - 12732.709: 77.0158% ( 103) 00:14:23.431 12732.709 - 12795.124: 77.9321% ( 95) 00:14:23.431 12795.124 - 12857.539: 78.8002% ( 90) 00:14:23.431 12857.539 - 12919.954: 79.6489% ( 88) 00:14:23.431 12919.954 - 12982.370: 80.4784% ( 86) 00:14:23.431 12982.370 - 13044.785: 81.2789% ( 83) 00:14:23.431 13044.785 - 13107.200: 82.0698% ( 82) 00:14:23.431 13107.200 - 13169.615: 82.6968% ( 65) 00:14:23.431 13169.615 - 13232.030: 83.2755% ( 60) 00:14:23.431 13232.030 - 13294.446: 83.7770% ( 52) 00:14:23.431 13294.446 - 13356.861: 84.2400% ( 48) 00:14:23.431 13356.861 - 13419.276: 84.6740% ( 45) 00:14:23.431 13419.276 - 13481.691: 85.0598% ( 40) 00:14:23.431 13481.691 - 13544.107: 85.4167% ( 37) 00:14:23.431 13544.107 - 13606.522: 85.7446% ( 34) 00:14:23.431 13606.522 - 13668.937: 86.0050% ( 27) 00:14:23.431 13668.937 - 13731.352: 86.2751% ( 28) 00:14:23.431 13731.352 - 13793.768: 86.5162% ( 25) 00:14:23.431 13793.768 - 13856.183: 86.6898% ( 18) 00:14:23.431 13856.183 - 13918.598: 86.8345% ( 15) 00:14:23.431 13918.598 - 13981.013: 86.9695% ( 14) 00:14:23.431 13981.013 - 14043.429: 87.0949% ( 13) 00:14:23.431 14043.429 - 14105.844: 87.2685% ( 18) 00:14:23.431 14105.844 - 14168.259: 87.4228% ( 16) 00:14:23.431 14168.259 - 14230.674: 87.6254% ( 21) 00:14:23.431 14230.674 - 14293.090: 87.8183% ( 20) 00:14:23.431 14293.090 - 14355.505: 88.0208% ( 21) 00:14:23.431 14355.505 - 14417.920: 88.1944% ( 18) 00:14:23.431 14417.920 - 14480.335: 88.4259% ( 24) 00:14:23.431 14480.335 - 14542.750: 88.6092% ( 19) 00:14:23.431 14542.750 - 14605.166: 88.7635% ( 16) 00:14:23.431 14605.166 - 14667.581: 88.9468% ( 19) 00:14:23.431 14667.581 - 14729.996: 89.1590% ( 22) 00:14:23.431 14729.996 - 14792.411: 89.4001% ( 25) 00:14:23.431 14792.411 - 14854.827: 89.5930% ( 20) 00:14:23.431 14854.827 - 14917.242: 89.9113% ( 33) 00:14:23.431 14917.242 - 14979.657: 90.1524% ( 25) 00:14:23.431 14979.657 - 15042.072: 90.3742% ( 23) 
00:14:23.431 15042.072 - 15104.488: 90.6539% ( 29) 00:14:23.431 15104.488 - 15166.903: 90.9240% ( 28) 00:14:23.431 15166.903 - 15229.318: 91.1748% ( 26) 00:14:23.431 15229.318 - 15291.733: 91.4352% ( 27) 00:14:23.431 15291.733 - 15354.149: 91.7052% ( 28) 00:14:23.431 15354.149 - 15416.564: 91.9367% ( 24) 00:14:23.431 15416.564 - 15478.979: 92.1875% ( 26) 00:14:23.431 15478.979 - 15541.394: 92.4286% ( 25) 00:14:23.431 15541.394 - 15603.810: 92.6408% ( 22) 00:14:23.431 15603.810 - 15666.225: 92.8627% ( 23) 00:14:23.431 15666.225 - 15728.640: 93.0459% ( 19) 00:14:23.431 15728.640 - 15791.055: 93.2388% ( 20) 00:14:23.431 15791.055 - 15853.470: 93.4799% ( 25) 00:14:23.431 15853.470 - 15915.886: 93.7307% ( 26) 00:14:23.431 15915.886 - 15978.301: 94.0008% ( 28) 00:14:23.431 15978.301 - 16103.131: 94.4927% ( 51) 00:14:23.431 16103.131 - 16227.962: 94.9749% ( 50) 00:14:23.431 16227.962 - 16352.792: 95.3511% ( 39) 00:14:23.431 16352.792 - 16477.623: 95.7562% ( 42) 00:14:23.431 16477.623 - 16602.453: 96.0938% ( 35) 00:14:23.431 16602.453 - 16727.284: 96.4120% ( 33) 00:14:23.431 16727.284 - 16852.114: 96.7207% ( 32) 00:14:23.431 16852.114 - 16976.945: 97.0100% ( 30) 00:14:23.431 16976.945 - 17101.775: 97.3283% ( 33) 00:14:23.431 17101.775 - 17226.606: 97.6273% ( 31) 00:14:23.431 17226.606 - 17351.436: 97.8106% ( 19) 00:14:23.431 17351.436 - 17476.267: 97.9263% ( 12) 00:14:23.431 17476.267 - 17601.097: 98.0228% ( 10) 00:14:23.431 17601.097 - 17725.928: 98.1096% ( 9) 00:14:23.431 17725.928 - 17850.758: 98.1674% ( 6) 00:14:23.431 17850.758 - 17975.589: 98.2253% ( 6) 00:14:23.431 17975.589 - 18100.419: 98.2832% ( 6) 00:14:23.431 18100.419 - 18225.250: 98.3218% ( 4) 00:14:23.431 18225.250 - 18350.080: 98.3796% ( 6) 00:14:23.431 18350.080 - 18474.910: 98.4279% ( 5) 00:14:23.431 18474.910 - 18599.741: 98.4761% ( 5) 00:14:23.431 18599.741 - 18724.571: 98.5243% ( 5) 00:14:23.431 18724.571 - 18849.402: 98.5918% ( 7) 00:14:23.431 18849.402 - 18974.232: 98.6304% ( 4) 00:14:23.431 18974.232 - 19099.063: 98.6786% ( 5) 00:14:23.431 19099.063 - 19223.893: 98.7269% ( 5) 00:14:23.431 19223.893 - 19348.724: 98.7654% ( 4) 00:14:23.431 43940.328 - 44189.989: 98.7751% ( 1) 00:14:23.431 44189.989 - 44439.650: 98.8233% ( 5) 00:14:23.431 44439.650 - 44689.310: 98.8715% ( 5) 00:14:23.431 44689.310 - 44938.971: 98.9198% ( 5) 00:14:23.431 44938.971 - 45188.632: 98.9873% ( 7) 00:14:23.431 45188.632 - 45438.293: 99.0355% ( 5) 00:14:23.431 45438.293 - 45687.954: 99.0837% ( 5) 00:14:23.431 45687.954 - 45937.615: 99.1416% ( 6) 00:14:23.431 45937.615 - 46187.276: 99.1995% ( 6) 00:14:23.431 46187.276 - 46436.937: 99.2380% ( 4) 00:14:23.431 46436.937 - 46686.598: 99.2863% ( 5) 00:14:23.431 46686.598 - 46936.259: 99.3441% ( 6) 00:14:23.431 46936.259 - 47185.920: 99.3827% ( 4) 00:14:23.431 56922.697 - 57172.358: 99.4406% ( 6) 00:14:23.431 57172.358 - 57422.019: 99.4985% ( 6) 00:14:23.431 57422.019 - 57671.680: 99.5467% ( 5) 00:14:23.431 57671.680 - 57921.341: 99.6046% ( 6) 00:14:23.431 57921.341 - 58171.002: 99.6528% ( 5) 00:14:23.431 58171.002 - 58420.663: 99.7106% ( 6) 00:14:23.431 58420.663 - 58670.324: 99.7685% ( 6) 00:14:23.431 58670.324 - 58919.985: 99.8264% ( 6) 00:14:23.431 58919.985 - 59169.646: 99.8843% ( 6) 00:14:23.431 59169.646 - 59419.307: 99.9421% ( 6) 00:14:23.431 59419.307 - 59668.968: 99.9904% ( 5) 00:14:23.431 59668.968 - 59918.629: 100.0000% ( 1) 00:14:23.431 00:14:23.431 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:23.431 
============================================================================== 00:14:23.431 Range in us Cumulative IO count 00:14:23.431 8862.964 - 8925.379: 0.0579% ( 6) 00:14:23.431 8925.379 - 8987.794: 0.1350% ( 8) 00:14:23.431 8987.794 - 9050.210: 0.3472% ( 22) 00:14:23.431 9050.210 - 9112.625: 0.5691% ( 23) 00:14:23.431 9112.625 - 9175.040: 0.8391% ( 28) 00:14:23.431 9175.040 - 9237.455: 1.2153% ( 39) 00:14:23.431 9237.455 - 9299.870: 1.6590% ( 46) 00:14:23.431 9299.870 - 9362.286: 2.1605% ( 52) 00:14:23.431 9362.286 - 9424.701: 2.6717% ( 53) 00:14:23.431 9424.701 - 9487.116: 3.2600% ( 61) 00:14:23.431 9487.116 - 9549.531: 3.8387% ( 60) 00:14:23.431 9549.531 - 9611.947: 4.4078% ( 59) 00:14:23.431 9611.947 - 9674.362: 5.0637% ( 68) 00:14:23.431 9674.362 - 9736.777: 5.6617% ( 62) 00:14:23.431 9736.777 - 9799.192: 6.2404% ( 60) 00:14:23.431 9799.192 - 9861.608: 6.9541% ( 74) 00:14:23.431 9861.608 - 9924.023: 7.6003% ( 67) 00:14:23.431 9924.023 - 9986.438: 8.1308% ( 55) 00:14:23.431 9986.438 - 10048.853: 8.6130% ( 50) 00:14:23.431 10048.853 - 10111.269: 9.1049% ( 51) 00:14:23.432 10111.269 - 10173.684: 9.5293% ( 44) 00:14:23.432 10173.684 - 10236.099: 9.9344% ( 42) 00:14:23.432 10236.099 - 10298.514: 10.3974% ( 48) 00:14:23.432 10298.514 - 10360.930: 10.9761% ( 60) 00:14:23.432 10360.930 - 10423.345: 11.7188% ( 77) 00:14:23.432 10423.345 - 10485.760: 12.8183% ( 114) 00:14:23.432 10485.760 - 10548.175: 14.4001% ( 164) 00:14:23.432 10548.175 - 10610.590: 16.1748% ( 184) 00:14:23.432 10610.590 - 10673.006: 18.2581% ( 216) 00:14:23.432 10673.006 - 10735.421: 20.3607% ( 218) 00:14:23.432 10735.421 - 10797.836: 22.7334% ( 246) 00:14:23.432 10797.836 - 10860.251: 25.0675% ( 242) 00:14:23.432 10860.251 - 10922.667: 27.4209% ( 244) 00:14:23.432 10922.667 - 10985.082: 29.9576% ( 263) 00:14:23.432 10985.082 - 11047.497: 32.4074% ( 254) 00:14:23.432 11047.497 - 11109.912: 34.9151% ( 260) 00:14:23.432 11109.912 - 11172.328: 37.3843% ( 256) 00:14:23.432 11172.328 - 11234.743: 40.0077% ( 272) 00:14:23.432 11234.743 - 11297.158: 42.6022% ( 269) 00:14:23.432 11297.158 - 11359.573: 45.1389% ( 263) 00:14:23.432 11359.573 - 11421.989: 47.6948% ( 265) 00:14:23.432 11421.989 - 11484.404: 50.0000% ( 239) 00:14:23.432 11484.404 - 11546.819: 52.4113% ( 250) 00:14:23.432 11546.819 - 11609.234: 54.4753% ( 214) 00:14:23.432 11609.234 - 11671.650: 56.3561% ( 195) 00:14:23.432 11671.650 - 11734.065: 57.8897% ( 159) 00:14:23.432 11734.065 - 11796.480: 59.4329% ( 160) 00:14:23.432 11796.480 - 11858.895: 60.8025% ( 142) 00:14:23.432 11858.895 - 11921.310: 62.3457% ( 160) 00:14:23.432 11921.310 - 11983.726: 63.5610% ( 126) 00:14:23.432 11983.726 - 12046.141: 64.7666% ( 125) 00:14:23.432 12046.141 - 12108.556: 65.8758% ( 115) 00:14:23.432 12108.556 - 12170.971: 67.0910% ( 126) 00:14:23.432 12170.971 - 12233.387: 68.3449% ( 130) 00:14:23.432 12233.387 - 12295.802: 69.6566% ( 136) 00:14:23.432 12295.802 - 12358.217: 70.8623% ( 125) 00:14:23.432 12358.217 - 12420.632: 72.0679% ( 125) 00:14:23.432 12420.632 - 12483.048: 73.2446% ( 122) 00:14:23.432 12483.048 - 12545.463: 74.3731% ( 117) 00:14:23.432 12545.463 - 12607.878: 75.5401% ( 121) 00:14:23.432 12607.878 - 12670.293: 76.6300% ( 113) 00:14:23.432 12670.293 - 12732.709: 77.6524% ( 106) 00:14:23.432 12732.709 - 12795.124: 78.6748% ( 106) 00:14:23.432 12795.124 - 12857.539: 79.6007% ( 96) 00:14:23.432 12857.539 - 12919.954: 80.4302% ( 86) 00:14:23.432 12919.954 - 12982.370: 81.2596% ( 86) 00:14:23.432 12982.370 - 13044.785: 81.9348% ( 70) 00:14:23.432 13044.785 - 13107.200: 
82.5521% ( 64) 00:14:23.432 13107.200 - 13169.615: 83.0440% ( 51) 00:14:23.432 13169.615 - 13232.030: 83.5841% ( 56) 00:14:23.432 13232.030 - 13294.446: 84.0374% ( 47) 00:14:23.432 13294.446 - 13356.861: 84.4618% ( 44) 00:14:23.432 13356.861 - 13419.276: 84.8476% ( 40) 00:14:23.432 13419.276 - 13481.691: 85.1755% ( 34) 00:14:23.432 13481.691 - 13544.107: 85.4456% ( 28) 00:14:23.432 13544.107 - 13606.522: 85.6964% ( 26) 00:14:23.432 13606.522 - 13668.937: 85.9086% ( 22) 00:14:23.432 13668.937 - 13731.352: 86.1015% ( 20) 00:14:23.432 13731.352 - 13793.768: 86.2365% ( 14) 00:14:23.432 13793.768 - 13856.183: 86.3715% ( 14) 00:14:23.432 13856.183 - 13918.598: 86.5162% ( 15) 00:14:23.432 13918.598 - 13981.013: 86.6609% ( 15) 00:14:23.432 13981.013 - 14043.429: 86.7959% ( 14) 00:14:23.432 14043.429 - 14105.844: 86.9695% ( 18) 00:14:23.432 14105.844 - 14168.259: 87.1335% ( 17) 00:14:23.432 14168.259 - 14230.674: 87.3071% ( 18) 00:14:23.432 14230.674 - 14293.090: 87.4904% ( 19) 00:14:23.432 14293.090 - 14355.505: 87.6640% ( 18) 00:14:23.432 14355.505 - 14417.920: 87.8665% ( 21) 00:14:23.432 14417.920 - 14480.335: 88.0594% ( 20) 00:14:23.432 14480.335 - 14542.750: 88.2330% ( 18) 00:14:23.432 14542.750 - 14605.166: 88.4838% ( 26) 00:14:23.432 14605.166 - 14667.581: 88.7249% ( 25) 00:14:23.432 14667.581 - 14729.996: 88.9660% ( 25) 00:14:23.432 14729.996 - 14792.411: 89.1975% ( 24) 00:14:23.432 14792.411 - 14854.827: 89.4387% ( 25) 00:14:23.432 14854.827 - 14917.242: 89.7280% ( 30) 00:14:23.432 14917.242 - 14979.657: 89.9981% ( 28) 00:14:23.432 14979.657 - 15042.072: 90.2585% ( 27) 00:14:23.432 15042.072 - 15104.488: 90.5575% ( 31) 00:14:23.432 15104.488 - 15166.903: 90.8758% ( 33) 00:14:23.432 15166.903 - 15229.318: 91.1748% ( 31) 00:14:23.432 15229.318 - 15291.733: 91.4448% ( 28) 00:14:23.432 15291.733 - 15354.149: 91.6763% ( 24) 00:14:23.432 15354.149 - 15416.564: 91.9174% ( 25) 00:14:23.432 15416.564 - 15478.979: 92.1971% ( 29) 00:14:23.432 15478.979 - 15541.394: 92.4672% ( 28) 00:14:23.432 15541.394 - 15603.810: 92.6794% ( 22) 00:14:23.432 15603.810 - 15666.225: 92.8627% ( 19) 00:14:23.432 15666.225 - 15728.640: 93.0652% ( 21) 00:14:23.432 15728.640 - 15791.055: 93.2581% ( 20) 00:14:23.432 15791.055 - 15853.470: 93.4703% ( 22) 00:14:23.432 15853.470 - 15915.886: 93.7018% ( 24) 00:14:23.432 15915.886 - 15978.301: 93.9525% ( 26) 00:14:23.432 15978.301 - 16103.131: 94.4734% ( 54) 00:14:23.432 16103.131 - 16227.962: 94.9556% ( 50) 00:14:23.432 16227.962 - 16352.792: 95.3800% ( 44) 00:14:23.432 16352.792 - 16477.623: 95.7948% ( 43) 00:14:23.432 16477.623 - 16602.453: 96.2288% ( 45) 00:14:23.432 16602.453 - 16727.284: 96.6435% ( 43) 00:14:23.432 16727.284 - 16852.114: 97.0390% ( 41) 00:14:23.432 16852.114 - 16976.945: 97.4151% ( 39) 00:14:23.432 16976.945 - 17101.775: 97.7141% ( 31) 00:14:23.432 17101.775 - 17226.606: 97.8492% ( 14) 00:14:23.432 17226.606 - 17351.436: 97.9552% ( 11) 00:14:23.432 17351.436 - 17476.267: 98.0324% ( 8) 00:14:23.432 17476.267 - 17601.097: 98.0999% ( 7) 00:14:23.432 17601.097 - 17725.928: 98.1674% ( 7) 00:14:23.432 17725.928 - 17850.758: 98.2350% ( 7) 00:14:23.432 17850.758 - 17975.589: 98.2928% ( 6) 00:14:23.432 17975.589 - 18100.419: 98.3603% ( 7) 00:14:23.432 18100.419 - 18225.250: 98.4182% ( 6) 00:14:23.432 18225.250 - 18350.080: 98.4761% ( 6) 00:14:23.432 18350.080 - 18474.910: 98.5340% ( 6) 00:14:23.432 18474.910 - 18599.741: 98.6015% ( 7) 00:14:23.432 18599.741 - 18724.571: 98.6593% ( 6) 00:14:23.432 18724.571 - 18849.402: 98.7269% ( 7) 00:14:23.432 18849.402 - 
18974.232: 98.7654% ( 4) 00:14:23.432 40445.074 - 40694.735: 98.7847% ( 2) 00:14:23.432 40694.735 - 40944.396: 98.8329% ( 5) 00:14:23.432 40944.396 - 41194.057: 98.8812% ( 5) 00:14:23.432 41194.057 - 41443.718: 98.9294% ( 5) 00:14:23.432 41443.718 - 41693.379: 98.9873% ( 6) 00:14:23.432 41693.379 - 41943.040: 99.0355% ( 5) 00:14:23.432 41943.040 - 42192.701: 99.0837% ( 5) 00:14:23.432 42192.701 - 42442.362: 99.1416% ( 6) 00:14:23.432 42442.362 - 42692.023: 99.1802% ( 4) 00:14:23.432 42692.023 - 42941.684: 99.2380% ( 6) 00:14:23.432 42941.684 - 43191.345: 99.2863% ( 5) 00:14:23.432 43191.345 - 43441.006: 99.3345% ( 5) 00:14:23.432 43441.006 - 43690.667: 99.3827% ( 5) 00:14:23.432 54176.427 - 54426.088: 99.4309% ( 5) 00:14:23.432 54426.088 - 54675.749: 99.4792% ( 5) 00:14:23.432 54675.749 - 54925.410: 99.5370% ( 6) 00:14:23.432 54925.410 - 55175.070: 99.5853% ( 5) 00:14:23.432 55175.070 - 55424.731: 99.6528% ( 7) 00:14:23.432 55424.731 - 55674.392: 99.7010% ( 5) 00:14:23.432 55674.392 - 55924.053: 99.7589% ( 6) 00:14:23.432 55924.053 - 56173.714: 99.8167% ( 6) 00:14:23.432 56173.714 - 56423.375: 99.8746% ( 6) 00:14:23.432 56423.375 - 56673.036: 99.9325% ( 6) 00:14:23.432 56673.036 - 56922.697: 99.9904% ( 6) 00:14:23.432 56922.697 - 57172.358: 100.0000% ( 1) 00:14:23.432 00:14:23.432 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:23.432 ============================================================================== 00:14:23.432 Range in us Cumulative IO count 00:14:23.432 8862.964 - 8925.379: 0.0289% ( 3) 00:14:23.432 8925.379 - 8987.794: 0.0965% ( 7) 00:14:23.432 8987.794 - 9050.210: 0.2701% ( 18) 00:14:23.432 9050.210 - 9112.625: 0.4726% ( 21) 00:14:23.432 9112.625 - 9175.040: 0.8488% ( 39) 00:14:23.432 9175.040 - 9237.455: 1.2828% ( 45) 00:14:23.432 9237.455 - 9299.870: 1.7747% ( 51) 00:14:23.432 9299.870 - 9362.286: 2.3052% ( 55) 00:14:23.432 9362.286 - 9424.701: 2.8453% ( 56) 00:14:23.432 9424.701 - 9487.116: 3.4722% ( 65) 00:14:23.432 9487.116 - 9549.531: 4.0606% ( 61) 00:14:23.432 9549.531 - 9611.947: 4.7261% ( 69) 00:14:23.432 9611.947 - 9674.362: 5.4205% ( 72) 00:14:23.432 9674.362 - 9736.777: 6.0764% ( 68) 00:14:23.432 9736.777 - 9799.192: 6.7708% ( 72) 00:14:23.432 9799.192 - 9861.608: 7.4171% ( 67) 00:14:23.432 9861.608 - 9924.023: 8.0440% ( 65) 00:14:23.432 9924.023 - 9986.438: 8.6420% ( 62) 00:14:23.432 9986.438 - 10048.853: 9.1917% ( 57) 00:14:23.432 10048.853 - 10111.269: 9.5968% ( 42) 00:14:23.432 10111.269 - 10173.684: 9.9055% ( 32) 00:14:23.432 10173.684 - 10236.099: 10.2720% ( 38) 00:14:23.432 10236.099 - 10298.514: 10.8025% ( 55) 00:14:23.432 10298.514 - 10360.930: 11.4005% ( 62) 00:14:23.432 10360.930 - 10423.345: 12.1238% ( 75) 00:14:23.432 10423.345 - 10485.760: 13.1559% ( 107) 00:14:23.432 10485.760 - 10548.175: 14.5833% ( 148) 00:14:23.432 10548.175 - 10610.590: 16.2712% ( 175) 00:14:23.432 10610.590 - 10673.006: 18.2774% ( 208) 00:14:23.432 10673.006 - 10735.421: 20.4765% ( 228) 00:14:23.432 10735.421 - 10797.836: 22.8974% ( 251) 00:14:23.432 10797.836 - 10860.251: 25.2894% ( 248) 00:14:23.432 10860.251 - 10922.667: 27.7296% ( 253) 00:14:23.432 10922.667 - 10985.082: 30.2951% ( 266) 00:14:23.433 10985.082 - 11047.497: 32.7160% ( 251) 00:14:23.433 11047.497 - 11109.912: 35.1852% ( 256) 00:14:23.433 11109.912 - 11172.328: 37.7025% ( 261) 00:14:23.433 11172.328 - 11234.743: 40.3356% ( 273) 00:14:23.433 11234.743 - 11297.158: 42.9012% ( 266) 00:14:23.433 11297.158 - 11359.573: 45.4186% ( 261) 00:14:23.433 11359.573 - 11421.989: 47.8395% ( 251) 
00:14:23.433 11421.989 - 11484.404: 50.1833% ( 243) 00:14:23.433 11484.404 - 11546.819: 52.3823% ( 228) 00:14:23.433 11546.819 - 11609.234: 54.5428% ( 224) 00:14:23.433 11609.234 - 11671.650: 56.5201% ( 205) 00:14:23.433 11671.650 - 11734.065: 58.1501% ( 169) 00:14:23.433 11734.065 - 11796.480: 59.7994% ( 171) 00:14:23.433 11796.480 - 11858.895: 61.3040% ( 156) 00:14:23.433 11858.895 - 11921.310: 62.7797% ( 153) 00:14:23.433 11921.310 - 11983.726: 64.0529% ( 132) 00:14:23.433 11983.726 - 12046.141: 65.2006% ( 119) 00:14:23.433 12046.141 - 12108.556: 66.3966% ( 124) 00:14:23.433 12108.556 - 12170.971: 67.6408% ( 129) 00:14:23.433 12170.971 - 12233.387: 68.9333% ( 134) 00:14:23.433 12233.387 - 12295.802: 70.2353% ( 135) 00:14:23.433 12295.802 - 12358.217: 71.5664% ( 138) 00:14:23.433 12358.217 - 12420.632: 72.7816% ( 126) 00:14:23.433 12420.632 - 12483.048: 73.9101% ( 117) 00:14:23.433 12483.048 - 12545.463: 75.0579% ( 119) 00:14:23.433 12545.463 - 12607.878: 76.1574% ( 114) 00:14:23.433 12607.878 - 12670.293: 77.2762% ( 116) 00:14:23.433 12670.293 - 12732.709: 78.2986% ( 106) 00:14:23.433 12732.709 - 12795.124: 79.2535% ( 99) 00:14:23.433 12795.124 - 12857.539: 80.1505% ( 93) 00:14:23.433 12857.539 - 12919.954: 80.9606% ( 84) 00:14:23.433 12919.954 - 12982.370: 81.6551% ( 72) 00:14:23.433 12982.370 - 13044.785: 82.3206% ( 69) 00:14:23.433 13044.785 - 13107.200: 82.9186% ( 62) 00:14:23.433 13107.200 - 13169.615: 83.4008% ( 50) 00:14:23.433 13169.615 - 13232.030: 83.8638% ( 48) 00:14:23.433 13232.030 - 13294.446: 84.2689% ( 42) 00:14:23.433 13294.446 - 13356.861: 84.5679% ( 31) 00:14:23.433 13356.861 - 13419.276: 84.8187% ( 26) 00:14:23.433 13419.276 - 13481.691: 85.0212% ( 21) 00:14:23.433 13481.691 - 13544.107: 85.2238% ( 21) 00:14:23.433 13544.107 - 13606.522: 85.4167% ( 20) 00:14:23.433 13606.522 - 13668.937: 85.6674% ( 26) 00:14:23.433 13668.937 - 13731.352: 85.8603% ( 20) 00:14:23.433 13731.352 - 13793.768: 86.0243% ( 17) 00:14:23.433 13793.768 - 13856.183: 86.1593% ( 14) 00:14:23.433 13856.183 - 13918.598: 86.2847% ( 13) 00:14:23.433 13918.598 - 13981.013: 86.4487% ( 17) 00:14:23.433 13981.013 - 14043.429: 86.5741% ( 13) 00:14:23.433 14043.429 - 14105.844: 86.7188% ( 15) 00:14:23.433 14105.844 - 14168.259: 86.8634% ( 15) 00:14:23.433 14168.259 - 14230.674: 87.0081% ( 15) 00:14:23.433 14230.674 - 14293.090: 87.2010% ( 20) 00:14:23.433 14293.090 - 14355.505: 87.3939% ( 20) 00:14:23.433 14355.505 - 14417.920: 87.6254% ( 24) 00:14:23.433 14417.920 - 14480.335: 87.8472% ( 23) 00:14:23.433 14480.335 - 14542.750: 88.0787% ( 24) 00:14:23.433 14542.750 - 14605.166: 88.3005% ( 23) 00:14:23.433 14605.166 - 14667.581: 88.5706% ( 28) 00:14:23.433 14667.581 - 14729.996: 88.8407% ( 28) 00:14:23.433 14729.996 - 14792.411: 89.1397% ( 31) 00:14:23.433 14792.411 - 14854.827: 89.4001% ( 27) 00:14:23.433 14854.827 - 14917.242: 89.6798% ( 29) 00:14:23.433 14917.242 - 14979.657: 89.9788% ( 31) 00:14:23.433 14979.657 - 15042.072: 90.2681% ( 30) 00:14:23.433 15042.072 - 15104.488: 90.5961% ( 34) 00:14:23.433 15104.488 - 15166.903: 90.9240% ( 34) 00:14:23.433 15166.903 - 15229.318: 91.2519% ( 34) 00:14:23.433 15229.318 - 15291.733: 91.5702% ( 33) 00:14:23.433 15291.733 - 15354.149: 91.8981% ( 34) 00:14:23.433 15354.149 - 15416.564: 92.1682% ( 28) 00:14:23.433 15416.564 - 15478.979: 92.3997% ( 24) 00:14:23.433 15478.979 - 15541.394: 92.6312% ( 24) 00:14:23.433 15541.394 - 15603.810: 92.8434% ( 22) 00:14:23.433 15603.810 - 15666.225: 93.0556% ( 22) 00:14:23.433 15666.225 - 15728.640: 93.3063% ( 26) 00:14:23.433 
15728.640 - 15791.055: 93.5571% ( 26) 00:14:23.433 15791.055 - 15853.470: 93.7596% ( 21) 00:14:23.433 15853.470 - 15915.886: 94.0008% ( 25) 00:14:23.433 15915.886 - 15978.301: 94.2130% ( 22) 00:14:23.433 15978.301 - 16103.131: 94.6952% ( 50) 00:14:23.433 16103.131 - 16227.962: 95.1678% ( 49) 00:14:23.433 16227.962 - 16352.792: 95.6404% ( 49) 00:14:23.433 16352.792 - 16477.623: 96.0938% ( 47) 00:14:23.433 16477.623 - 16602.453: 96.6339% ( 56) 00:14:23.433 16602.453 - 16727.284: 97.1740% ( 56) 00:14:23.433 16727.284 - 16852.114: 97.6177% ( 46) 00:14:23.433 16852.114 - 16976.945: 97.9167% ( 31) 00:14:23.433 16976.945 - 17101.775: 98.1385% ( 23) 00:14:23.433 17101.775 - 17226.606: 98.2928% ( 16) 00:14:23.433 17226.606 - 17351.436: 98.4182% ( 13) 00:14:23.433 17351.436 - 17476.267: 98.5243% ( 11) 00:14:23.433 17476.267 - 17601.097: 98.6400% ( 12) 00:14:23.433 17601.097 - 17725.928: 98.6883% ( 5) 00:14:23.433 17725.928 - 17850.758: 98.7365% ( 5) 00:14:23.433 17850.758 - 17975.589: 98.7654% ( 3) 00:14:23.433 39696.091 - 39945.752: 98.7944% ( 3) 00:14:23.433 39945.752 - 40195.413: 98.8522% ( 6) 00:14:23.433 40195.413 - 40445.074: 98.9101% ( 6) 00:14:23.433 40445.074 - 40694.735: 98.9680% ( 6) 00:14:23.433 40694.735 - 40944.396: 99.0258% ( 6) 00:14:23.433 40944.396 - 41194.057: 99.0837% ( 6) 00:14:23.433 41194.057 - 41443.718: 99.1416% ( 6) 00:14:23.433 41443.718 - 41693.379: 99.1898% ( 5) 00:14:23.433 41693.379 - 41943.040: 99.2573% ( 7) 00:14:23.433 41943.040 - 42192.701: 99.3056% ( 5) 00:14:23.433 42192.701 - 42442.362: 99.3634% ( 6) 00:14:23.433 42442.362 - 42692.023: 99.3827% ( 2) 00:14:23.433 51929.478 - 52179.139: 99.3924% ( 1) 00:14:23.433 52179.139 - 52428.800: 99.4406% ( 5) 00:14:23.433 52428.800 - 52678.461: 99.4985% ( 6) 00:14:23.433 52678.461 - 52928.122: 99.5563% ( 6) 00:14:23.433 52928.122 - 53177.783: 99.6142% ( 6) 00:14:23.433 53177.783 - 53427.444: 99.6624% ( 5) 00:14:23.433 53427.444 - 53677.105: 99.7106% ( 5) 00:14:23.433 53677.105 - 53926.766: 99.7685% ( 6) 00:14:23.433 53926.766 - 54176.427: 99.8264% ( 6) 00:14:23.433 54176.427 - 54426.088: 99.8843% ( 6) 00:14:23.433 54426.088 - 54675.749: 99.9228% ( 4) 00:14:23.433 54675.749 - 54925.410: 99.9807% ( 6) 00:14:23.433 54925.410 - 55175.070: 100.0000% ( 2) 00:14:23.433 00:14:23.433 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:23.433 ============================================================================== 00:14:23.433 Range in us Cumulative IO count 00:14:23.433 8862.964 - 8925.379: 0.0386% ( 4) 00:14:23.433 8925.379 - 8987.794: 0.0965% ( 6) 00:14:23.433 8987.794 - 9050.210: 0.2025% ( 11) 00:14:23.433 9050.210 - 9112.625: 0.4533% ( 26) 00:14:23.433 9112.625 - 9175.040: 0.8584% ( 42) 00:14:23.433 9175.040 - 9237.455: 1.2346% ( 39) 00:14:23.433 9237.455 - 9299.870: 1.7168% ( 50) 00:14:23.433 9299.870 - 9362.286: 2.2184% ( 52) 00:14:23.433 9362.286 - 9424.701: 2.7874% ( 59) 00:14:23.433 9424.701 - 9487.116: 3.3468% ( 58) 00:14:23.433 9487.116 - 9549.531: 3.8870% ( 56) 00:14:23.433 9549.531 - 9611.947: 4.5621% ( 70) 00:14:23.433 9611.947 - 9674.362: 5.1698% ( 63) 00:14:23.433 9674.362 - 9736.777: 5.7967% ( 65) 00:14:23.433 9736.777 - 9799.192: 6.4622% ( 69) 00:14:23.433 9799.192 - 9861.608: 7.1181% ( 68) 00:14:23.433 9861.608 - 9924.023: 7.7160% ( 62) 00:14:23.433 9924.023 - 9986.438: 8.2562% ( 56) 00:14:23.433 9986.438 - 10048.853: 8.6709% ( 43) 00:14:23.433 10048.853 - 10111.269: 9.0278% ( 37) 00:14:23.433 10111.269 - 10173.684: 9.3461% ( 33) 00:14:23.433 10173.684 - 10236.099: 9.7222% ( 39) 00:14:23.433 
10236.099 - 10298.514: 10.2238% ( 52) 00:14:23.433 10298.514 - 10360.930: 10.7446% ( 54) 00:14:23.433 10360.930 - 10423.345: 11.4776% ( 76) 00:14:23.433 10423.345 - 10485.760: 12.6447% ( 121) 00:14:23.433 10485.760 - 10548.175: 14.1011% ( 151) 00:14:23.433 10548.175 - 10610.590: 15.8758% ( 184) 00:14:23.433 10610.590 - 10673.006: 17.9302% ( 213) 00:14:23.433 10673.006 - 10735.421: 20.1775% ( 233) 00:14:23.433 10735.421 - 10797.836: 22.5502% ( 246) 00:14:23.433 10797.836 - 10860.251: 24.9228% ( 246) 00:14:23.433 10860.251 - 10922.667: 27.3920% ( 256) 00:14:23.433 10922.667 - 10985.082: 29.8418% ( 254) 00:14:23.433 10985.082 - 11047.497: 32.3302% ( 258) 00:14:23.433 11047.497 - 11109.912: 34.8283% ( 259) 00:14:23.433 11109.912 - 11172.328: 37.4132% ( 268) 00:14:23.433 11172.328 - 11234.743: 39.8920% ( 257) 00:14:23.433 11234.743 - 11297.158: 42.5444% ( 275) 00:14:23.433 11297.158 - 11359.573: 45.1775% ( 273) 00:14:23.433 11359.573 - 11421.989: 47.6755% ( 259) 00:14:23.433 11421.989 - 11484.404: 49.9807% ( 239) 00:14:23.433 11484.404 - 11546.819: 52.1701% ( 227) 00:14:23.433 11546.819 - 11609.234: 54.3017% ( 221) 00:14:23.433 11609.234 - 11671.650: 56.0764% ( 184) 00:14:23.433 11671.650 - 11734.065: 57.7546% ( 174) 00:14:23.433 11734.065 - 11796.480: 59.2785% ( 158) 00:14:23.433 11796.480 - 11858.895: 60.7446% ( 152) 00:14:23.433 11858.895 - 11921.310: 62.0081% ( 131) 00:14:23.433 11921.310 - 11983.726: 63.2137% ( 125) 00:14:23.433 11983.726 - 12046.141: 64.3519% ( 118) 00:14:23.433 12046.141 - 12108.556: 65.3453% ( 103) 00:14:23.433 12108.556 - 12170.971: 66.5509% ( 125) 00:14:23.434 12170.971 - 12233.387: 67.7758% ( 127) 00:14:23.434 12233.387 - 12295.802: 69.1551% ( 143) 00:14:23.434 12295.802 - 12358.217: 70.4572% ( 135) 00:14:23.434 12358.217 - 12420.632: 71.7207% ( 131) 00:14:23.434 12420.632 - 12483.048: 72.8588% ( 118) 00:14:23.434 12483.048 - 12545.463: 74.0258% ( 121) 00:14:23.434 12545.463 - 12607.878: 75.1640% ( 118) 00:14:23.434 12607.878 - 12670.293: 76.3214% ( 120) 00:14:23.434 12670.293 - 12732.709: 77.3823% ( 110) 00:14:23.434 12732.709 - 12795.124: 78.3951% ( 105) 00:14:23.434 12795.124 - 12857.539: 79.2535% ( 89) 00:14:23.434 12857.539 - 12919.954: 80.0829% ( 86) 00:14:23.434 12919.954 - 12982.370: 80.7967% ( 74) 00:14:23.434 12982.370 - 13044.785: 81.4718% ( 70) 00:14:23.434 13044.785 - 13107.200: 82.1084% ( 66) 00:14:23.434 13107.200 - 13169.615: 82.6775% ( 59) 00:14:23.434 13169.615 - 13232.030: 83.2079% ( 55) 00:14:23.434 13232.030 - 13294.446: 83.7481% ( 56) 00:14:23.434 13294.446 - 13356.861: 84.2014% ( 47) 00:14:23.434 13356.861 - 13419.276: 84.5486% ( 36) 00:14:23.434 13419.276 - 13481.691: 84.8669% ( 33) 00:14:23.434 13481.691 - 13544.107: 85.1659% ( 31) 00:14:23.434 13544.107 - 13606.522: 85.4167% ( 26) 00:14:23.434 13606.522 - 13668.937: 85.6481% ( 24) 00:14:23.434 13668.937 - 13731.352: 85.8700% ( 23) 00:14:23.434 13731.352 - 13793.768: 86.1111% ( 25) 00:14:23.434 13793.768 - 13856.183: 86.3040% ( 20) 00:14:23.434 13856.183 - 13918.598: 86.4680% ( 17) 00:14:23.434 13918.598 - 13981.013: 86.6512% ( 19) 00:14:23.434 13981.013 - 14043.429: 86.8441% ( 20) 00:14:23.434 14043.429 - 14105.844: 87.0563% ( 22) 00:14:23.434 14105.844 - 14168.259: 87.2492% ( 20) 00:14:23.434 14168.259 - 14230.674: 87.4132% ( 17) 00:14:23.434 14230.674 - 14293.090: 87.5965% ( 19) 00:14:23.434 14293.090 - 14355.505: 87.7604% ( 17) 00:14:23.434 14355.505 - 14417.920: 87.9244% ( 17) 00:14:23.434 14417.920 - 14480.335: 88.1752% ( 26) 00:14:23.434 14480.335 - 14542.750: 88.4066% ( 24) 
00:14:23.434 14542.750 - 14605.166: 88.6960% ( 30) 00:14:23.434 14605.166 - 14667.581: 88.9950% ( 31) 00:14:23.434 14667.581 - 14729.996: 89.2843% ( 30) 00:14:23.434 14729.996 - 14792.411: 89.6412% ( 37) 00:14:23.434 14792.411 - 14854.827: 89.9981% ( 37) 00:14:23.434 14854.827 - 14917.242: 90.3549% ( 37) 00:14:23.434 14917.242 - 14979.657: 90.7022% ( 36) 00:14:23.434 14979.657 - 15042.072: 91.0494% ( 36) 00:14:23.434 15042.072 - 15104.488: 91.4255% ( 39) 00:14:23.434 15104.488 - 15166.903: 91.7245% ( 31) 00:14:23.434 15166.903 - 15229.318: 92.0621% ( 35) 00:14:23.434 15229.318 - 15291.733: 92.3708% ( 32) 00:14:23.434 15291.733 - 15354.149: 92.6890% ( 33) 00:14:23.434 15354.149 - 15416.564: 92.9784% ( 30) 00:14:23.434 15416.564 - 15478.979: 93.3063% ( 34) 00:14:23.434 15478.979 - 15541.394: 93.5860% ( 29) 00:14:23.434 15541.394 - 15603.810: 93.9043% ( 33) 00:14:23.434 15603.810 - 15666.225: 94.2226% ( 33) 00:14:23.434 15666.225 - 15728.640: 94.5023% ( 29) 00:14:23.434 15728.640 - 15791.055: 94.7338% ( 24) 00:14:23.434 15791.055 - 15853.470: 95.0424% ( 32) 00:14:23.434 15853.470 - 15915.886: 95.2932% ( 26) 00:14:23.434 15915.886 - 15978.301: 95.5247% ( 24) 00:14:23.434 15978.301 - 16103.131: 96.0359% ( 53) 00:14:23.434 16103.131 - 16227.962: 96.4506% ( 43) 00:14:23.434 16227.962 - 16352.792: 96.7303% ( 29) 00:14:23.434 16352.792 - 16477.623: 97.0293% ( 31) 00:14:23.434 16477.623 - 16602.453: 97.3669% ( 35) 00:14:23.434 16602.453 - 16727.284: 97.6948% ( 34) 00:14:23.434 16727.284 - 16852.114: 97.9842% ( 30) 00:14:23.434 16852.114 - 16976.945: 98.2542% ( 28) 00:14:23.434 16976.945 - 17101.775: 98.4375% ( 19) 00:14:23.434 17101.775 - 17226.606: 98.5436% ( 11) 00:14:23.434 17226.606 - 17351.436: 98.6111% ( 7) 00:14:23.434 17351.436 - 17476.267: 98.6786% ( 7) 00:14:23.434 17476.267 - 17601.097: 98.7461% ( 7) 00:14:23.434 17601.097 - 17725.928: 98.7654% ( 2) 00:14:23.434 37199.482 - 37449.143: 98.8040% ( 4) 00:14:23.434 37449.143 - 37698.804: 98.8522% ( 5) 00:14:23.434 37698.804 - 37948.465: 98.9101% ( 6) 00:14:23.434 37948.465 - 38198.126: 98.9680% ( 6) 00:14:23.434 38198.126 - 38447.787: 99.0258% ( 6) 00:14:23.434 38447.787 - 38697.448: 99.0741% ( 5) 00:14:23.434 38697.448 - 38947.109: 99.1319% ( 6) 00:14:23.434 38947.109 - 39196.770: 99.1995% ( 7) 00:14:23.434 39196.770 - 39446.430: 99.2573% ( 6) 00:14:23.434 39446.430 - 39696.091: 99.3056% ( 5) 00:14:23.434 39696.091 - 39945.752: 99.3731% ( 7) 00:14:23.434 39945.752 - 40195.413: 99.3827% ( 1) 00:14:23.434 48933.547 - 49183.208: 99.4406% ( 6) 00:14:23.434 49183.208 - 49432.869: 99.4985% ( 6) 00:14:23.434 49432.869 - 49682.530: 99.5467% ( 5) 00:14:23.434 49682.530 - 49932.190: 99.6142% ( 7) 00:14:23.434 49932.190 - 50181.851: 99.6721% ( 6) 00:14:23.434 50181.851 - 50431.512: 99.7299% ( 6) 00:14:23.434 50431.512 - 50681.173: 99.7878% ( 6) 00:14:23.434 50681.173 - 50930.834: 99.8457% ( 6) 00:14:23.434 50930.834 - 51180.495: 99.9035% ( 6) 00:14:23.434 51180.495 - 51430.156: 99.9614% ( 6) 00:14:23.434 51430.156 - 51679.817: 100.0000% ( 4) 00:14:23.434 00:14:23.434 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:23.434 ============================================================================== 00:14:23.434 Range in us Cumulative IO count 00:14:23.434 8862.964 - 8925.379: 0.0289% ( 3) 00:14:23.434 8925.379 - 8987.794: 0.1061% ( 8) 00:14:23.434 8987.794 - 9050.210: 0.2701% ( 17) 00:14:23.434 9050.210 - 9112.625: 0.5498% ( 29) 00:14:23.434 9112.625 - 9175.040: 0.9934% ( 46) 00:14:23.434 9175.040 - 9237.455: 1.3407% ( 36) 
00:14:23.434 9237.455 - 9299.870: 1.7843% ( 46) 00:14:23.434 9299.870 - 9362.286: 2.3148% ( 55) 00:14:23.434 9362.286 - 9424.701: 2.8839% ( 59) 00:14:23.434 9424.701 - 9487.116: 3.4626% ( 60) 00:14:23.434 9487.116 - 9549.531: 4.0799% ( 64) 00:14:23.434 9549.531 - 9611.947: 4.7068% ( 65) 00:14:23.434 9611.947 - 9674.362: 5.3337% ( 65) 00:14:23.434 9674.362 - 9736.777: 5.9317% ( 62) 00:14:23.434 9736.777 - 9799.192: 6.5972% ( 69) 00:14:23.434 9799.192 - 9861.608: 7.2242% ( 65) 00:14:23.434 9861.608 - 9924.023: 7.8125% ( 61) 00:14:23.434 9924.023 - 9986.438: 8.2948% ( 50) 00:14:23.434 9986.438 - 10048.853: 8.7577% ( 48) 00:14:23.434 10048.853 - 10111.269: 9.1725% ( 43) 00:14:23.434 10111.269 - 10173.684: 9.5968% ( 44) 00:14:23.434 10173.684 - 10236.099: 10.0212% ( 44) 00:14:23.434 10236.099 - 10298.514: 10.4070% ( 40) 00:14:23.434 10298.514 - 10360.930: 10.9086% ( 52) 00:14:23.434 10360.930 - 10423.345: 11.6223% ( 74) 00:14:23.434 10423.345 - 10485.760: 12.7797% ( 120) 00:14:23.434 10485.760 - 10548.175: 14.2361% ( 151) 00:14:23.434 10548.175 - 10610.590: 16.0012% ( 183) 00:14:23.434 10610.590 - 10673.006: 18.0073% ( 208) 00:14:23.434 10673.006 - 10735.421: 20.2643% ( 234) 00:14:23.434 10735.421 - 10797.836: 22.4344% ( 225) 00:14:23.434 10797.836 - 10860.251: 24.9614% ( 262) 00:14:23.434 10860.251 - 10922.667: 27.3148% ( 244) 00:14:23.434 10922.667 - 10985.082: 29.8900% ( 267) 00:14:23.434 10985.082 - 11047.497: 32.2338% ( 243) 00:14:23.434 11047.497 - 11109.912: 34.9055% ( 277) 00:14:23.434 11109.912 - 11172.328: 37.2975% ( 248) 00:14:23.434 11172.328 - 11234.743: 39.9306% ( 273) 00:14:23.434 11234.743 - 11297.158: 42.4961% ( 266) 00:14:23.434 11297.158 - 11359.573: 45.1871% ( 279) 00:14:23.434 11359.573 - 11421.989: 47.6080% ( 251) 00:14:23.434 11421.989 - 11484.404: 50.0579% ( 254) 00:14:23.434 11484.404 - 11546.819: 52.3341% ( 236) 00:14:23.434 11546.819 - 11609.234: 54.3981% ( 214) 00:14:23.434 11609.234 - 11671.650: 56.3175% ( 199) 00:14:23.434 11671.650 - 11734.065: 58.0054% ( 175) 00:14:23.434 11734.065 - 11796.480: 59.5293% ( 158) 00:14:23.434 11796.480 - 11858.895: 60.9182% ( 144) 00:14:23.434 11858.895 - 11921.310: 62.3167% ( 145) 00:14:23.434 11921.310 - 11983.726: 63.5899% ( 132) 00:14:23.434 11983.726 - 12046.141: 64.6894% ( 114) 00:14:23.434 12046.141 - 12108.556: 65.8854% ( 124) 00:14:23.434 12108.556 - 12170.971: 67.0814% ( 124) 00:14:23.434 12170.971 - 12233.387: 68.3353% ( 130) 00:14:23.434 12233.387 - 12295.802: 69.6470% ( 136) 00:14:23.434 12295.802 - 12358.217: 70.9780% ( 138) 00:14:23.434 12358.217 - 12420.632: 72.2029% ( 127) 00:14:23.434 12420.632 - 12483.048: 73.3314% ( 117) 00:14:23.434 12483.048 - 12545.463: 74.4695% ( 118) 00:14:23.434 12545.463 - 12607.878: 75.5787% ( 115) 00:14:23.434 12607.878 - 12670.293: 76.6397% ( 110) 00:14:23.435 12670.293 - 12732.709: 77.6427% ( 104) 00:14:23.435 12732.709 - 12795.124: 78.5976% ( 99) 00:14:23.435 12795.124 - 12857.539: 79.4560% ( 89) 00:14:23.435 12857.539 - 12919.954: 80.1794% ( 75) 00:14:23.435 12919.954 - 12982.370: 80.7967% ( 64) 00:14:23.435 12982.370 - 13044.785: 81.4236% ( 65) 00:14:23.435 13044.785 - 13107.200: 81.9155% ( 51) 00:14:23.435 13107.200 - 13169.615: 82.3881% ( 49) 00:14:23.435 13169.615 - 13232.030: 82.8704% ( 50) 00:14:23.435 13232.030 - 13294.446: 83.3140% ( 46) 00:14:23.435 13294.446 - 13356.861: 83.6806% ( 38) 00:14:23.435 13356.861 - 13419.276: 83.9796% ( 31) 00:14:23.435 13419.276 - 13481.691: 84.2882% ( 32) 00:14:23.435 13481.691 - 13544.107: 84.6065% ( 33) 00:14:23.435 13544.107 - 13606.522: 
84.9151% ( 32) 00:14:23.435 13606.522 - 13668.937: 85.1755% ( 27) 00:14:23.435 13668.937 - 13731.352: 85.4070% ( 24) 00:14:23.435 13731.352 - 13793.768: 85.6385% ( 24) 00:14:23.435 13793.768 - 13856.183: 85.8893% ( 26) 00:14:23.435 13856.183 - 13918.598: 86.1208% ( 24) 00:14:23.435 13918.598 - 13981.013: 86.3812% ( 27) 00:14:23.435 13981.013 - 14043.429: 86.6512% ( 28) 00:14:23.435 14043.429 - 14105.844: 86.9020% ( 26) 00:14:23.435 14105.844 - 14168.259: 87.2396% ( 35) 00:14:23.435 14168.259 - 14230.674: 87.4904% ( 26) 00:14:23.435 14230.674 - 14293.090: 87.7701% ( 29) 00:14:23.435 14293.090 - 14355.505: 88.0305% ( 27) 00:14:23.435 14355.505 - 14417.920: 88.3584% ( 34) 00:14:23.435 14417.920 - 14480.335: 88.6574% ( 31) 00:14:23.435 14480.335 - 14542.750: 88.9757% ( 33) 00:14:23.435 14542.750 - 14605.166: 89.2265% ( 26) 00:14:23.435 14605.166 - 14667.581: 89.5448% ( 33) 00:14:23.435 14667.581 - 14729.996: 89.8052% ( 27) 00:14:23.435 14729.996 - 14792.411: 90.0849% ( 29) 00:14:23.435 14792.411 - 14854.827: 90.3646% ( 29) 00:14:23.435 14854.827 - 14917.242: 90.6443% ( 29) 00:14:23.435 14917.242 - 14979.657: 90.9047% ( 27) 00:14:23.435 14979.657 - 15042.072: 91.1651% ( 27) 00:14:23.435 15042.072 - 15104.488: 91.4352% ( 28) 00:14:23.435 15104.488 - 15166.903: 91.7052% ( 28) 00:14:23.435 15166.903 - 15229.318: 91.9753% ( 28) 00:14:23.435 15229.318 - 15291.733: 92.2743% ( 31) 00:14:23.435 15291.733 - 15354.149: 92.6119% ( 35) 00:14:23.435 15354.149 - 15416.564: 92.9302% ( 33) 00:14:23.435 15416.564 - 15478.979: 93.2677% ( 35) 00:14:23.435 15478.979 - 15541.394: 93.6053% ( 35) 00:14:23.435 15541.394 - 15603.810: 93.8465% ( 25) 00:14:23.435 15603.810 - 15666.225: 94.1165% ( 28) 00:14:23.435 15666.225 - 15728.640: 94.3480% ( 24) 00:14:23.435 15728.640 - 15791.055: 94.5795% ( 24) 00:14:23.435 15791.055 - 15853.470: 94.8206% ( 25) 00:14:23.435 15853.470 - 15915.886: 95.0231% ( 21) 00:14:23.435 15915.886 - 15978.301: 95.1968% ( 18) 00:14:23.435 15978.301 - 16103.131: 95.5247% ( 34) 00:14:23.435 16103.131 - 16227.962: 95.9105% ( 40) 00:14:23.435 16227.962 - 16352.792: 96.3156% ( 42) 00:14:23.435 16352.792 - 16477.623: 96.7207% ( 42) 00:14:23.435 16477.623 - 16602.453: 97.0968% ( 39) 00:14:23.435 16602.453 - 16727.284: 97.4151% ( 33) 00:14:23.435 16727.284 - 16852.114: 97.7141% ( 31) 00:14:23.435 16852.114 - 16976.945: 98.0324% ( 33) 00:14:23.435 16976.945 - 17101.775: 98.3410% ( 32) 00:14:23.435 17101.775 - 17226.606: 98.5243% ( 19) 00:14:23.435 17226.606 - 17351.436: 98.6400% ( 12) 00:14:23.435 17351.436 - 17476.267: 98.7076% ( 7) 00:14:23.435 17476.267 - 17601.097: 98.7654% ( 6) 00:14:23.435 34453.211 - 34702.872: 98.8233% ( 6) 00:14:23.435 34702.872 - 34952.533: 98.8715% ( 5) 00:14:23.435 34952.533 - 35202.194: 98.9294% ( 6) 00:14:23.435 35202.194 - 35451.855: 98.9776% ( 5) 00:14:23.435 35451.855 - 35701.516: 99.0451% ( 7) 00:14:23.435 35701.516 - 35951.177: 99.0934% ( 5) 00:14:23.435 35951.177 - 36200.838: 99.1512% ( 6) 00:14:23.435 36200.838 - 36450.499: 99.2091% ( 6) 00:14:23.435 36450.499 - 36700.160: 99.2670% ( 6) 00:14:23.435 36700.160 - 36949.821: 99.3248% ( 6) 00:14:23.435 36949.821 - 37199.482: 99.3827% ( 6) 00:14:23.435 45687.954 - 45937.615: 99.4309% ( 5) 00:14:23.435 45937.615 - 46187.276: 99.4888% ( 6) 00:14:23.435 46187.276 - 46436.937: 99.5370% ( 5) 00:14:23.435 46436.937 - 46686.598: 99.5949% ( 6) 00:14:23.435 46686.598 - 46936.259: 99.6624% ( 7) 00:14:23.435 46936.259 - 47185.920: 99.7203% ( 6) 00:14:23.435 47185.920 - 47435.581: 99.7782% ( 6) 00:14:23.435 47435.581 - 47685.242: 
99.8360% ( 6) 00:14:23.435 47685.242 - 47934.903: 99.8843% ( 5) 00:14:23.435 47934.903 - 48184.564: 99.9421% ( 6) 00:14:23.435 48184.564 - 48434.225: 100.0000% ( 6) 00:14:23.435 00:14:23.435 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:23.435 ============================================================================== 00:14:23.435 Range in us Cumulative IO count 00:14:23.435 8862.964 - 8925.379: 0.0383% ( 4) 00:14:23.435 8925.379 - 8987.794: 0.1725% ( 14) 00:14:23.435 8987.794 - 9050.210: 0.3834% ( 22) 00:14:23.435 9050.210 - 9112.625: 0.6710% ( 30) 00:14:23.435 9112.625 - 9175.040: 1.0065% ( 35) 00:14:23.435 9175.040 - 9237.455: 1.4475% ( 46) 00:14:23.435 9237.455 - 9299.870: 1.9076% ( 48) 00:14:23.435 9299.870 - 9362.286: 2.4252% ( 54) 00:14:23.435 9362.286 - 9424.701: 2.9716% ( 57) 00:14:23.435 9424.701 - 9487.116: 3.5468% ( 60) 00:14:23.435 9487.116 - 9549.531: 4.1699% ( 65) 00:14:23.435 9549.531 - 9611.947: 4.7929% ( 65) 00:14:23.435 9611.947 - 9674.362: 5.4352% ( 67) 00:14:23.435 9674.362 - 9736.777: 6.0199% ( 61) 00:14:23.435 9736.777 - 9799.192: 6.7005% ( 71) 00:14:23.435 9799.192 - 9861.608: 7.2757% ( 60) 00:14:23.435 9861.608 - 9924.023: 7.8413% ( 59) 00:14:23.435 9924.023 - 9986.438: 8.3301% ( 51) 00:14:23.435 9986.438 - 10048.853: 8.8094% ( 50) 00:14:23.435 10048.853 - 10111.269: 9.2216% ( 43) 00:14:23.435 10111.269 - 10173.684: 9.6146% ( 41) 00:14:23.435 10173.684 - 10236.099: 10.0364% ( 44) 00:14:23.435 10236.099 - 10298.514: 10.4582% ( 44) 00:14:23.435 10298.514 - 10360.930: 10.9854% ( 55) 00:14:23.435 10360.930 - 10423.345: 11.6469% ( 69) 00:14:23.435 10423.345 - 10485.760: 12.8451% ( 125) 00:14:23.435 10485.760 - 10548.175: 14.2734% ( 149) 00:14:23.435 10548.175 - 10610.590: 15.9317% ( 173) 00:14:23.435 10610.590 - 10673.006: 17.8106% ( 196) 00:14:23.435 10673.006 - 10735.421: 20.0441% ( 233) 00:14:23.436 10735.421 - 10797.836: 22.3735% ( 243) 00:14:23.436 10797.836 - 10860.251: 24.6933% ( 242) 00:14:23.436 10860.251 - 10922.667: 27.1185% ( 253) 00:14:23.436 10922.667 - 10985.082: 29.4479% ( 243) 00:14:23.436 10985.082 - 11047.497: 31.8731% ( 253) 00:14:23.436 11047.497 - 11109.912: 34.2887% ( 252) 00:14:23.436 11109.912 - 11172.328: 36.8194% ( 264) 00:14:23.436 11172.328 - 11234.743: 39.3213% ( 261) 00:14:23.436 11234.743 - 11297.158: 42.0437% ( 284) 00:14:23.436 11297.158 - 11359.573: 44.5744% ( 264) 00:14:23.436 11359.573 - 11421.989: 47.1242% ( 266) 00:14:23.436 11421.989 - 11484.404: 49.5303% ( 251) 00:14:23.436 11484.404 - 11546.819: 51.9459% ( 252) 00:14:23.436 11546.819 - 11609.234: 54.1315% ( 228) 00:14:23.436 11609.234 - 11671.650: 55.8953% ( 184) 00:14:23.436 11671.650 - 11734.065: 57.5824% ( 176) 00:14:23.436 11734.065 - 11796.480: 59.0491% ( 153) 00:14:23.436 11796.480 - 11858.895: 60.4294% ( 144) 00:14:23.436 11858.895 - 11921.310: 61.6948% ( 132) 00:14:23.436 11921.310 - 11983.726: 62.9314% ( 129) 00:14:23.436 11983.726 - 12046.141: 64.0433% ( 116) 00:14:23.436 12046.141 - 12108.556: 65.2032% ( 121) 00:14:23.436 12108.556 - 12170.971: 66.3823% ( 123) 00:14:23.436 12170.971 - 12233.387: 67.6668% ( 134) 00:14:23.436 12233.387 - 12295.802: 68.9513% ( 134) 00:14:23.436 12295.802 - 12358.217: 70.3317% ( 144) 00:14:23.436 12358.217 - 12420.632: 71.5970% ( 132) 00:14:23.436 12420.632 - 12483.048: 72.7473% ( 120) 00:14:23.436 12483.048 - 12545.463: 73.9168% ( 122) 00:14:23.436 12545.463 - 12607.878: 75.0383% ( 117) 00:14:23.436 12607.878 - 12670.293: 76.1120% ( 112) 00:14:23.436 12670.293 - 12732.709: 77.0322% ( 96) 00:14:23.436 12732.709 - 
12795.124: 77.9333% ( 94) 00:14:23.436 12795.124 - 12857.539: 78.8152% ( 92) 00:14:23.436 12857.539 - 12919.954: 79.6683% ( 89) 00:14:23.436 12919.954 - 12982.370: 80.4544% ( 82) 00:14:23.436 12982.370 - 13044.785: 81.2117% ( 79) 00:14:23.436 13044.785 - 13107.200: 81.8731% ( 69) 00:14:23.436 13107.200 - 13169.615: 82.4866% ( 64) 00:14:23.436 13169.615 - 13232.030: 83.0521% ( 59) 00:14:23.436 13232.030 - 13294.446: 83.5123% ( 48) 00:14:23.436 13294.446 - 13356.861: 83.9436% ( 45) 00:14:23.436 13356.861 - 13419.276: 84.3175% ( 39) 00:14:23.436 13419.276 - 13481.691: 84.6722% ( 37) 00:14:23.436 13481.691 - 13544.107: 84.9406% ( 28) 00:14:23.436 13544.107 - 13606.522: 85.2186% ( 29) 00:14:23.436 13606.522 - 13668.937: 85.4870% ( 28) 00:14:23.436 13668.937 - 13731.352: 85.6979% ( 22) 00:14:23.436 13731.352 - 13793.768: 85.9279% ( 24) 00:14:23.436 13793.768 - 13856.183: 86.1484% ( 23) 00:14:23.436 13856.183 - 13918.598: 86.3209% ( 18) 00:14:23.436 13918.598 - 13981.013: 86.5606% ( 25) 00:14:23.436 13981.013 - 14043.429: 86.7619% ( 21) 00:14:23.436 14043.429 - 14105.844: 86.9919% ( 24) 00:14:23.436 14105.844 - 14168.259: 87.1837% ( 20) 00:14:23.436 14168.259 - 14230.674: 87.4521% ( 28) 00:14:23.436 14230.674 - 14293.090: 87.6630% ( 22) 00:14:23.436 14293.090 - 14355.505: 87.9218% ( 27) 00:14:23.436 14355.505 - 14417.920: 88.1614% ( 25) 00:14:23.436 14417.920 - 14480.335: 88.4394% ( 29) 00:14:23.436 14480.335 - 14542.750: 88.7558% ( 33) 00:14:23.436 14542.750 - 14605.166: 89.0337% ( 29) 00:14:23.436 14605.166 - 14667.581: 89.3213% ( 30) 00:14:23.436 14667.581 - 14729.996: 89.5897% ( 28) 00:14:23.436 14729.996 - 14792.411: 89.8677% ( 29) 00:14:23.436 14792.411 - 14854.827: 90.0882% ( 23) 00:14:23.436 14854.827 - 14917.242: 90.3374% ( 26) 00:14:23.436 14917.242 - 14979.657: 90.5675% ( 24) 00:14:23.436 14979.657 - 15042.072: 90.8455% ( 29) 00:14:23.436 15042.072 - 15104.488: 91.0660% ( 23) 00:14:23.436 15104.488 - 15166.903: 91.3439% ( 29) 00:14:23.436 15166.903 - 15229.318: 91.6123% ( 28) 00:14:23.436 15229.318 - 15291.733: 91.8808% ( 28) 00:14:23.436 15291.733 - 15354.149: 92.1779% ( 31) 00:14:23.436 15354.149 - 15416.564: 92.4176% ( 25) 00:14:23.436 15416.564 - 15478.979: 92.6668% ( 26) 00:14:23.436 15478.979 - 15541.394: 92.9064% ( 25) 00:14:23.436 15541.394 - 15603.810: 93.1077% ( 21) 00:14:23.436 15603.810 - 15666.225: 93.2515% ( 15) 00:14:23.436 15666.225 - 15728.640: 93.3666% ( 12) 00:14:23.436 15728.640 - 15791.055: 93.4624% ( 10) 00:14:23.436 15791.055 - 15853.470: 93.5870% ( 13) 00:14:23.436 15853.470 - 15915.886: 93.6829% ( 10) 00:14:23.436 15915.886 - 15978.301: 93.8363% ( 16) 00:14:23.436 15978.301 - 16103.131: 94.2197% ( 40) 00:14:23.436 16103.131 - 16227.962: 94.6319% ( 43) 00:14:23.436 16227.962 - 16352.792: 95.0729% ( 46) 00:14:23.436 16352.792 - 16477.623: 95.5330% ( 48) 00:14:23.436 16477.623 - 16602.453: 95.9643% ( 45) 00:14:23.436 16602.453 - 16727.284: 96.4053% ( 46) 00:14:23.436 16727.284 - 16852.114: 96.8367% ( 45) 00:14:23.436 16852.114 - 16976.945: 97.2680% ( 45) 00:14:23.436 16976.945 - 17101.775: 97.6610% ( 41) 00:14:23.436 17101.775 - 17226.606: 97.8528% ( 20) 00:14:23.436 17226.606 - 17351.436: 97.9007% ( 5) 00:14:23.436 17351.436 - 17476.267: 97.9390% ( 4) 00:14:23.436 17476.267 - 17601.097: 97.9774% ( 4) 00:14:23.436 17601.097 - 17725.928: 98.0157% ( 4) 00:14:23.436 17725.928 - 17850.758: 98.0637% ( 5) 00:14:23.436 17850.758 - 17975.589: 98.1595% ( 10) 00:14:23.436 17975.589 - 18100.419: 98.2745% ( 12) 00:14:23.436 18100.419 - 18225.250: 98.3704% ( 10) 00:14:23.436 
18225.250 - 18350.080: 98.4375% ( 7)
00:14:23.436 18350.080 - 18474.910: 98.4854% ( 5)
00:14:23.436 18474.910 - 18599.741: 98.5525% ( 7)
00:14:23.436 18599.741 - 18724.571: 98.6196% ( 7)
00:14:23.436 18724.571 - 18849.402: 98.6867% ( 7)
00:14:23.436 18849.402 - 18974.232: 98.7538% ( 7)
00:14:23.436 18974.232 - 19099.063: 98.7730% ( 2)
00:14:23.436 21845.333 - 21970.164: 98.8018% ( 3)
00:14:23.436 21970.164 - 22094.994: 98.8305% ( 3)
00:14:23.436 22094.994 - 22219.825: 98.8593% ( 3)
00:14:23.436 22219.825 - 22344.655: 98.8880% ( 3)
00:14:23.436 22344.655 - 22469.486: 98.9168% ( 3)
00:14:23.436 22469.486 - 22594.316: 98.9360% ( 2)
00:14:23.436 22594.316 - 22719.147: 98.9743% ( 4)
00:14:23.436 22719.147 - 22843.977: 99.0031% ( 3)
00:14:23.436 22843.977 - 22968.808: 99.0222% ( 2)
00:14:23.436 22968.808 - 23093.638: 99.0510% ( 3)
00:14:23.436 23093.638 - 23218.469: 99.0798% ( 3)
00:14:23.436 23218.469 - 23343.299: 99.1085% ( 3)
00:14:23.436 23343.299 - 23468.130: 99.1277% ( 2)
00:14:23.436 23468.130 - 23592.960: 99.1660% ( 4)
00:14:23.436 23592.960 - 23717.790: 99.1948% ( 3)
00:14:23.436 23717.790 - 23842.621: 99.2140% ( 2)
00:14:23.436 23842.621 - 23967.451: 99.2523% ( 4)
00:14:23.436 23967.451 - 24092.282: 99.2715% ( 2)
00:14:23.436 24092.282 - 24217.112: 99.3098% ( 4)
00:14:23.436 24217.112 - 24341.943: 99.3290% ( 2)
00:14:23.436 24341.943 - 24466.773: 99.3673% ( 4)
00:14:23.436 24466.773 - 24591.604: 99.3865% ( 2)
00:14:23.436 33704.229 - 33953.890: 99.3961% ( 1)
00:14:23.436 33953.890 - 34203.550: 99.4536% ( 6)
00:14:23.436 34203.550 - 34453.211: 99.5111% ( 6)
00:14:23.436 34453.211 - 34702.872: 99.5686% ( 6)
00:14:23.436 34702.872 - 34952.533: 99.6262% ( 6)
00:14:23.436 34952.533 - 35202.194: 99.6837% ( 6)
00:14:23.436 35202.194 - 35451.855: 99.7412% ( 6)
00:14:23.436 35451.855 - 35701.516: 99.7987% ( 6)
00:14:23.436 35701.516 - 35951.177: 99.8562% ( 6)
00:14:23.436 35951.177 - 36200.838: 99.9233% ( 7)
00:14:23.436 36200.838 - 36450.499: 99.9712% ( 5)
00:14:23.436 36450.499 - 36700.160: 100.0000% ( 3)
00:14:23.436
00:14:23.436 15:26:09 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:14:24.817 Initializing NVMe Controllers
00:14:24.817 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:24.817 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:14:24.817 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:14:24.817 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:14:24.817 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:24.817 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:14:24.817 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:14:24.817 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:14:24.817 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:14:24.817 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:14:24.817 Initialization complete. Launching workers.
00:14:24.817 ========================================================
00:14:24.817 Latency(us)
00:14:24.817 Device Information : IOPS MiB/s Average min max
00:14:24.817 PCIE (0000:00:10.0) NSID 1 from core 0: 10385.76 121.71 12371.63 10095.50 40304.75
00:14:24.817 PCIE (0000:00:11.0) NSID 1 from core 0: 10385.76 121.71 12356.80 10263.31 38203.16
00:14:24.817 PCIE (0000:00:13.0) NSID 1 from core 0: 10385.76 121.71 12341.27 10197.82 37226.22
00:14:24.817 PCIE (0000:00:12.0) NSID 1 from core 0: 10385.76 121.71 12325.37 10186.22 35761.59
00:14:24.817 PCIE (0000:00:12.0) NSID 2 from core 0: 10385.76 121.71 12309.99 10116.53 34040.27
00:14:24.817 PCIE (0000:00:12.0) NSID 3 from core 0: 10449.48 122.45 12219.75 10083.10 25455.02
00:14:24.817 ========================================================
00:14:24.817 Total : 62378.29 731.00 12320.70 10083.10 40304.75
00:14:24.817
00:14:24.817 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:24.817 =================================================================================
00:14:24.817 1.00000% : 10360.930us
00:14:24.817 10.00000% : 10922.667us
00:14:24.817 25.00000% : 11359.573us
00:14:24.817 50.00000% : 11921.310us
00:14:24.817 75.00000% : 12545.463us
00:14:24.817 90.00000% : 13793.768us
00:14:24.817 95.00000% : 15166.903us
00:14:24.817 98.00000% : 16227.962us
00:14:24.817 99.00000% : 31706.941us
00:14:24.817 99.50000% : 38697.448us
00:14:24.817 99.90000% : 39945.752us
00:14:24.817 99.99000% : 40445.074us
00:14:24.817 99.99900% : 40445.074us
00:14:24.817 99.99990% : 40445.074us
00:14:24.817 99.99999% : 40445.074us
00:14:24.817
00:14:24.817 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:14:24.817 =================================================================================
00:14:24.817 1.00000% : 10548.175us
00:14:24.817 10.00000% : 10985.082us
00:14:24.817 25.00000% : 11421.989us
00:14:24.817 50.00000% : 11858.895us
00:14:24.817 75.00000% : 12483.048us
00:14:24.817 90.00000% : 13918.598us
00:14:24.817 95.00000% : 15042.072us
00:14:24.817 98.00000% : 16477.623us
00:14:24.817 99.00000% : 29959.314us
00:14:24.817 99.50000% : 36949.821us
00:14:24.817 99.90000% : 37948.465us
00:14:24.817 99.99000% : 38198.126us
00:14:24.817 99.99900% : 38447.787us
00:14:24.817 99.99990% : 38447.787us
00:14:24.817 99.99999% : 38447.787us
00:14:24.817
00:14:24.817 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:14:24.817 =================================================================================
00:14:24.817 1.00000% : 10548.175us
00:14:24.817 10.00000% : 10985.082us
00:14:24.817 25.00000% : 11359.573us
00:14:24.817 50.00000% : 11921.310us
00:14:24.817 75.00000% : 12483.048us
00:14:24.817 90.00000% : 13856.183us
00:14:24.817 95.00000% : 15104.488us
00:14:24.817 98.00000% : 16227.962us
00:14:24.817 99.00000% : 28586.179us
00:14:24.817 99.50000% : 35701.516us
00:14:24.817 99.90000% : 36949.821us
00:14:24.817 99.99000% : 37199.482us
00:14:24.817 99.99900% : 37449.143us
00:14:24.817 99.99990% : 37449.143us
00:14:24.817 99.99999% : 37449.143us
00:14:24.817
00:14:24.817 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:14:24.817 =================================================================================
00:14:24.817 1.00000% : 10485.760us
00:14:24.817 10.00000% : 10985.082us
00:14:24.817 25.00000% : 11421.989us
00:14:24.817 50.00000% : 11921.310us
00:14:24.817 75.00000% : 12420.632us
00:14:24.817 90.00000% : 13981.013us
00:14:24.817 95.00000% : 15104.488us
00:14:24.817 98.00000% : 16477.623us
00:14:24.817 99.00000% : 26713.722us
00:14:24.817 99.50000% : 34203.550us
00:14:24.817 99.90000% : 35451.855us
00:14:24.817 99.99000% : 35951.177us
00:14:24.817 99.99900% : 35951.177us
00:14:24.817 99.99990% : 35951.177us
00:14:24.817 99.99999% : 35951.177us
00:14:24.817
00:14:24.817 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:14:24.817 =================================================================================
00:14:24.817 1.00000% : 10548.175us
00:14:24.817 10.00000% : 10985.082us
00:14:24.817 25.00000% : 11421.989us
00:14:24.817 50.00000% : 11921.310us
00:14:24.817 75.00000% : 12483.048us
00:14:24.817 90.00000% : 14043.429us
00:14:24.817 95.00000% : 15229.318us
00:14:24.817 98.00000% : 16352.792us
00:14:24.817 99.00000% : 24716.434us
00:14:24.817 99.50000% : 32455.924us
00:14:24.817 99.90000% : 33704.229us
00:14:24.817 99.99000% : 34203.550us
00:14:24.817 99.99900% : 34203.550us
00:14:24.817 99.99990% : 34203.550us
00:14:24.817 99.99999% : 34203.550us
00:14:24.817
00:14:24.817 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:14:24.817 =================================================================================
00:14:24.817 1.00000% : 10548.175us
00:14:24.817 10.00000% : 10985.082us
00:14:24.817 25.00000% : 11359.573us
00:14:24.817 50.00000% : 11921.310us
00:14:24.817 75.00000% : 12483.048us
00:14:24.817 90.00000% : 13856.183us
00:14:24.817 95.00000% : 15229.318us
00:14:24.817 98.00000% : 16352.792us
00:14:24.817 99.00000% : 18225.250us
00:14:24.817 99.50000% : 23967.451us
00:14:24.818 99.90000% : 25215.756us
00:14:24.818 99.99000% : 25465.417us
00:14:24.818 99.99900% : 25465.417us
00:14:24.818 99.99990% : 25465.417us
00:14:24.818 99.99999% : 25465.417us
00:14:24.818
00:14:24.818 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:24.818 ==============================================================================
00:14:24.818 Range in us Cumulative IO count
00:14:24.818 10048.853 - 10111.269: 0.1054% ( 11)
00:14:24.818 10111.269 - 10173.684: 0.3163% ( 22)
00:14:24.818 10173.684 - 10236.099: 0.4697% ( 16)
00:14:24.818 10236.099 - 10298.514: 0.8436% ( 39)
00:14:24.818 10298.514 - 10360.930: 1.3420% ( 52)
00:14:24.818 10360.930 - 10423.345: 2.1952% ( 89)
00:14:24.818 10423.345 - 10485.760: 3.0483% ( 89)
00:14:24.818 10485.760 - 10548.175: 3.7768% ( 76)
00:14:24.818 10548.175 - 10610.590: 4.7834% ( 105)
00:14:24.818 10610.590 - 10673.006: 5.6844% ( 94)
00:14:24.818 10673.006 - 10735.421: 6.6334% ( 99)
00:14:24.818 10735.421 - 10797.836: 7.6016% ( 101)
00:14:24.818 10797.836 - 10860.251: 8.9820% ( 144)
00:14:24.818 10860.251 - 10922.667: 10.4486% ( 153)
00:14:24.818 10922.667 - 10985.082: 12.6725% ( 232)
00:14:24.818 10985.082 - 11047.497: 14.6185% ( 203)
00:14:24.818 11047.497 - 11109.912: 16.7849% ( 226)
00:14:24.818 11109.912 - 11172.328: 19.2005% ( 252)
00:14:24.818 11172.328 - 11234.743: 21.5107% ( 241)
00:14:24.818 11234.743 - 11297.158: 23.7347% ( 232)
00:14:24.818 11297.158 - 11359.573: 26.1887% ( 256)
00:14:24.818 11359.573 - 11421.989: 28.5372% ( 245)
00:14:24.818 11421.989 - 11484.404: 31.2692% ( 285)
00:14:24.818 11484.404 - 11546.819: 34.2408% ( 310)
00:14:24.818 11546.819 - 11609.234: 37.6054% ( 351)
00:14:24.818 11609.234 - 11671.650: 40.7975% ( 333)
00:14:24.818 11671.650 - 11734.065: 43.8459% ( 318)
00:14:24.818 11734.065 - 11796.480: 46.8462% ( 313)
00:14:24.818 11796.480 - 11858.895: 49.6645% ( 294)
00:14:24.818 11858.895 - 11921.310: 52.5690% ( 303)
00:14:24.818 11921.310 - 11983.726:
55.0613% ( 260) 00:14:24.818 11983.726 - 12046.141: 57.7742% ( 283) 00:14:24.818 12046.141 - 12108.556: 60.3144% ( 265) 00:14:24.818 12108.556 - 12170.971: 62.7972% ( 259) 00:14:24.818 12170.971 - 12233.387: 65.1745% ( 248) 00:14:24.818 12233.387 - 12295.802: 67.4942% ( 242) 00:14:24.818 12295.802 - 12358.217: 69.6990% ( 230) 00:14:24.818 12358.217 - 12420.632: 71.7025% ( 209) 00:14:24.818 12420.632 - 12483.048: 73.4663% ( 184) 00:14:24.818 12483.048 - 12545.463: 75.2780% ( 189) 00:14:24.818 12545.463 - 12607.878: 76.8980% ( 169) 00:14:24.818 12607.878 - 12670.293: 78.4126% ( 158) 00:14:24.818 12670.293 - 12732.709: 79.7163% ( 136) 00:14:24.818 12732.709 - 12795.124: 81.0391% ( 138) 00:14:24.818 12795.124 - 12857.539: 82.2949% ( 131) 00:14:24.818 12857.539 - 12919.954: 83.4835% ( 124) 00:14:24.818 12919.954 - 12982.370: 84.2983% ( 85) 00:14:24.818 12982.370 - 13044.785: 85.0077% ( 74) 00:14:24.818 13044.785 - 13107.200: 85.6403% ( 66) 00:14:24.818 13107.200 - 13169.615: 86.1963% ( 58) 00:14:24.818 13169.615 - 13232.030: 86.7715% ( 60) 00:14:24.818 13232.030 - 13294.446: 87.2412% ( 49) 00:14:24.818 13294.446 - 13356.861: 87.6438% ( 42) 00:14:24.818 13356.861 - 13419.276: 88.0464% ( 42) 00:14:24.818 13419.276 - 13481.691: 88.5353% ( 51) 00:14:24.818 13481.691 - 13544.107: 89.0146% ( 50) 00:14:24.818 13544.107 - 13606.522: 89.4076% ( 41) 00:14:24.818 13606.522 - 13668.937: 89.6664% ( 27) 00:14:24.818 13668.937 - 13731.352: 89.8965% ( 24) 00:14:24.818 13731.352 - 13793.768: 90.0786% ( 19) 00:14:24.818 13793.768 - 13856.183: 90.3087% ( 24) 00:14:24.818 13856.183 - 13918.598: 90.7496% ( 46) 00:14:24.818 13918.598 - 13981.013: 91.0276% ( 29) 00:14:24.818 13981.013 - 14043.429: 91.3248% ( 31) 00:14:24.818 14043.429 - 14105.844: 91.6315% ( 32) 00:14:24.818 14105.844 - 14168.259: 91.8808% ( 26) 00:14:24.818 14168.259 - 14230.674: 92.0821% ( 21) 00:14:24.818 14230.674 - 14293.090: 92.2163% ( 14) 00:14:24.818 14293.090 - 14355.505: 92.4176% ( 21) 00:14:24.818 14355.505 - 14417.920: 92.5613% ( 15) 00:14:24.818 14417.920 - 14480.335: 92.7147% ( 16) 00:14:24.818 14480.335 - 14542.750: 92.9735% ( 27) 00:14:24.818 14542.750 - 14605.166: 93.1461% ( 18) 00:14:24.818 14605.166 - 14667.581: 93.4049% ( 27) 00:14:24.818 14667.581 - 14729.996: 93.6158% ( 22) 00:14:24.818 14729.996 - 14792.411: 93.8554% ( 25) 00:14:24.818 14792.411 - 14854.827: 94.0759% ( 23) 00:14:24.818 14854.827 - 14917.242: 94.3347% ( 27) 00:14:24.818 14917.242 - 14979.657: 94.5169% ( 19) 00:14:24.818 14979.657 - 15042.072: 94.6702% ( 16) 00:14:24.818 15042.072 - 15104.488: 94.8620% ( 20) 00:14:24.818 15104.488 - 15166.903: 95.0729% ( 22) 00:14:24.818 15166.903 - 15229.318: 95.3317% ( 27) 00:14:24.818 15229.318 - 15291.733: 95.5521% ( 23) 00:14:24.818 15291.733 - 15354.149: 95.7918% ( 25) 00:14:24.818 15354.149 - 15416.564: 95.9739% ( 19) 00:14:24.818 15416.564 - 15478.979: 96.1369% ( 17) 00:14:24.818 15478.979 - 15541.394: 96.3286% ( 20) 00:14:24.818 15541.394 - 15603.810: 96.5203% ( 20) 00:14:24.818 15603.810 - 15666.225: 96.6162% ( 10) 00:14:24.818 15666.225 - 15728.640: 96.7983% ( 19) 00:14:24.818 15728.640 - 15791.055: 96.9709% ( 18) 00:14:24.818 15791.055 - 15853.470: 97.1434% ( 18) 00:14:24.818 15853.470 - 15915.886: 97.2872% ( 15) 00:14:24.818 15915.886 - 15978.301: 97.4502% ( 17) 00:14:24.818 15978.301 - 16103.131: 97.7761% ( 34) 00:14:24.818 16103.131 - 16227.962: 98.0445% ( 28) 00:14:24.818 16227.962 - 16352.792: 98.2458% ( 21) 00:14:24.818 16352.792 - 16477.623: 98.3704% ( 13) 00:14:24.818 16477.623 - 16602.453: 98.4375% ( 7) 
00:14:24.818 16602.453 - 16727.284: 98.4950% ( 6) 00:14:24.818 16727.284 - 16852.114: 98.5621% ( 7) 00:14:24.818 16852.114 - 16976.945: 98.6292% ( 7) 00:14:24.818 16976.945 - 17101.775: 98.6676% ( 4) 00:14:24.818 17101.775 - 17226.606: 98.7059% ( 4) 00:14:24.818 17226.606 - 17351.436: 98.7538% ( 5) 00:14:24.818 17351.436 - 17476.267: 98.7730% ( 2) 00:14:24.818 30957.958 - 31082.789: 98.8113% ( 4) 00:14:24.818 31082.789 - 31207.619: 98.8593% ( 5) 00:14:24.818 31207.619 - 31332.450: 98.9168% ( 6) 00:14:24.818 31332.450 - 31457.280: 98.9264% ( 1) 00:14:24.818 31457.280 - 31582.110: 98.9743% ( 5) 00:14:24.818 31582.110 - 31706.941: 99.0127% ( 4) 00:14:24.818 31706.941 - 31831.771: 99.0414% ( 3) 00:14:24.818 31831.771 - 31956.602: 99.0798% ( 4) 00:14:24.818 31956.602 - 32206.263: 99.1469% ( 7) 00:14:24.818 32206.263 - 32455.924: 99.2331% ( 9) 00:14:24.818 32455.924 - 32705.585: 99.3098% ( 8) 00:14:24.818 32705.585 - 32955.246: 99.3769% ( 7) 00:14:24.818 32955.246 - 33204.907: 99.3865% ( 1) 00:14:24.818 38198.126 - 38447.787: 99.4440% ( 6) 00:14:24.818 38447.787 - 38697.448: 99.5207% ( 8) 00:14:24.818 38697.448 - 38947.109: 99.5878% ( 7) 00:14:24.818 38947.109 - 39196.770: 99.6741% ( 9) 00:14:24.819 39196.770 - 39446.430: 99.7508% ( 8) 00:14:24.819 39446.430 - 39696.091: 99.8179% ( 7) 00:14:24.819 39696.091 - 39945.752: 99.9041% ( 9) 00:14:24.819 39945.752 - 40195.413: 99.9712% ( 7) 00:14:24.819 40195.413 - 40445.074: 100.0000% ( 3) 00:14:24.819 00:14:24.819 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:24.819 ============================================================================== 00:14:24.819 Range in us Cumulative IO count 00:14:24.819 10236.099 - 10298.514: 0.0671% ( 7) 00:14:24.819 10298.514 - 10360.930: 0.3259% ( 27) 00:14:24.819 10360.930 - 10423.345: 0.6231% ( 31) 00:14:24.819 10423.345 - 10485.760: 0.9969% ( 39) 00:14:24.819 10485.760 - 10548.175: 1.6775% ( 71) 00:14:24.819 10548.175 - 10610.590: 2.5211% ( 88) 00:14:24.819 10610.590 - 10673.006: 3.6235% ( 115) 00:14:24.819 10673.006 - 10735.421: 4.6683% ( 109) 00:14:24.819 10735.421 - 10797.836: 5.8953% ( 128) 00:14:24.819 10797.836 - 10860.251: 7.3140% ( 148) 00:14:24.819 10860.251 - 10922.667: 8.8478% ( 160) 00:14:24.819 10922.667 - 10985.082: 10.5732% ( 180) 00:14:24.819 10985.082 - 11047.497: 12.4137% ( 192) 00:14:24.819 11047.497 - 11109.912: 14.4939% ( 217) 00:14:24.819 11109.912 - 11172.328: 16.8520% ( 246) 00:14:24.819 11172.328 - 11234.743: 19.4210% ( 268) 00:14:24.819 11234.743 - 11297.158: 22.0859% ( 278) 00:14:24.819 11297.158 - 11359.573: 24.7987% ( 283) 00:14:24.819 11359.573 - 11421.989: 27.8278% ( 316) 00:14:24.819 11421.989 - 11484.404: 30.8090% ( 311) 00:14:24.819 11484.404 - 11546.819: 33.8574% ( 318) 00:14:24.819 11546.819 - 11609.234: 36.8865% ( 316) 00:14:24.819 11609.234 - 11671.650: 40.0211% ( 327) 00:14:24.819 11671.650 - 11734.065: 43.2515% ( 337) 00:14:24.819 11734.065 - 11796.480: 46.6833% ( 358) 00:14:24.819 11796.480 - 11858.895: 50.0959% ( 356) 00:14:24.819 11858.895 - 11921.310: 53.6043% ( 366) 00:14:24.819 11921.310 - 11983.726: 56.9689% ( 351) 00:14:24.819 11983.726 - 12046.141: 60.1898% ( 336) 00:14:24.819 12046.141 - 12108.556: 63.0368% ( 297) 00:14:24.819 12108.556 - 12170.971: 65.5579% ( 263) 00:14:24.819 12170.971 - 12233.387: 67.8202% ( 236) 00:14:24.819 12233.387 - 12295.802: 70.0920% ( 237) 00:14:24.819 12295.802 - 12358.217: 72.1051% ( 210) 00:14:24.819 12358.217 - 12420.632: 73.8593% ( 183) 00:14:24.819 12420.632 - 12483.048: 75.5272% ( 174) 00:14:24.819 12483.048 - 
12545.463: 77.2143% ( 176) 00:14:24.819 12545.463 - 12607.878: 78.7768% ( 163) 00:14:24.819 12607.878 - 12670.293: 80.2339% ( 152) 00:14:24.819 12670.293 - 12732.709: 81.2788% ( 109) 00:14:24.819 12732.709 - 12795.124: 82.1223% ( 88) 00:14:24.819 12795.124 - 12857.539: 82.9275% ( 84) 00:14:24.819 12857.539 - 12919.954: 83.6081% ( 71) 00:14:24.819 12919.954 - 12982.370: 84.2408% ( 66) 00:14:24.819 12982.370 - 13044.785: 84.7968% ( 58) 00:14:24.819 13044.785 - 13107.200: 85.4965% ( 73) 00:14:24.819 13107.200 - 13169.615: 86.0909% ( 62) 00:14:24.819 13169.615 - 13232.030: 86.6469% ( 58) 00:14:24.819 13232.030 - 13294.446: 87.0111% ( 38) 00:14:24.819 13294.446 - 13356.861: 87.4137% ( 42) 00:14:24.819 13356.861 - 13419.276: 87.8067% ( 41) 00:14:24.819 13419.276 - 13481.691: 88.3148% ( 53) 00:14:24.819 13481.691 - 13544.107: 88.6695% ( 37) 00:14:24.819 13544.107 - 13606.522: 88.9379% ( 28) 00:14:24.819 13606.522 - 13668.937: 89.1488% ( 22) 00:14:24.819 13668.937 - 13731.352: 89.3884% ( 25) 00:14:24.819 13731.352 - 13793.768: 89.6856% ( 31) 00:14:24.819 13793.768 - 13856.183: 89.9444% ( 27) 00:14:24.819 13856.183 - 13918.598: 90.1840% ( 25) 00:14:24.819 13918.598 - 13981.013: 90.5004% ( 33) 00:14:24.819 13981.013 - 14043.429: 90.9126% ( 43) 00:14:24.819 14043.429 - 14105.844: 91.2289% ( 33) 00:14:24.819 14105.844 - 14168.259: 91.5261% ( 31) 00:14:24.819 14168.259 - 14230.674: 91.7945% ( 28) 00:14:24.819 14230.674 - 14293.090: 92.0725% ( 29) 00:14:24.819 14293.090 - 14355.505: 92.2929% ( 23) 00:14:24.819 14355.505 - 14417.920: 92.5230% ( 24) 00:14:24.819 14417.920 - 14480.335: 92.8106% ( 30) 00:14:24.819 14480.335 - 14542.750: 93.0886% ( 29) 00:14:24.819 14542.750 - 14605.166: 93.2995% ( 22) 00:14:24.819 14605.166 - 14667.581: 93.4816% ( 19) 00:14:24.819 14667.581 - 14729.996: 93.7212% ( 25) 00:14:24.819 14729.996 - 14792.411: 93.9513% ( 24) 00:14:24.819 14792.411 - 14854.827: 94.2197% ( 28) 00:14:24.819 14854.827 - 14917.242: 94.4881% ( 28) 00:14:24.819 14917.242 - 14979.657: 94.8236% ( 35) 00:14:24.819 14979.657 - 15042.072: 95.0633% ( 25) 00:14:24.819 15042.072 - 15104.488: 95.2358% ( 18) 00:14:24.819 15104.488 - 15166.903: 95.4755% ( 25) 00:14:24.819 15166.903 - 15229.318: 95.6097% ( 14) 00:14:24.819 15229.318 - 15291.733: 95.7630% ( 16) 00:14:24.819 15291.733 - 15354.149: 95.8972% ( 14) 00:14:24.819 15354.149 - 15416.564: 96.0794% ( 19) 00:14:24.819 15416.564 - 15478.979: 96.2423% ( 17) 00:14:24.819 15478.979 - 15541.394: 96.4245% ( 19) 00:14:24.819 15541.394 - 15603.810: 96.6258% ( 21) 00:14:24.819 15603.810 - 15666.225: 96.7600% ( 14) 00:14:24.819 15666.225 - 15728.640: 96.9229% ( 17) 00:14:24.819 15728.640 - 15791.055: 97.0284% ( 11) 00:14:24.819 15791.055 - 15853.470: 97.1434% ( 12) 00:14:24.819 15853.470 - 15915.886: 97.2968% ( 16) 00:14:24.819 15915.886 - 15978.301: 97.4789% ( 19) 00:14:24.819 15978.301 - 16103.131: 97.7665% ( 30) 00:14:24.819 16103.131 - 16227.962: 97.8719% ( 11) 00:14:24.819 16227.962 - 16352.792: 97.9774% ( 11) 00:14:24.819 16352.792 - 16477.623: 98.0828% ( 11) 00:14:24.819 16477.623 - 16602.453: 98.1979% ( 12) 00:14:24.819 16602.453 - 16727.284: 98.2841% ( 9) 00:14:24.819 16727.284 - 16852.114: 98.3512% ( 7) 00:14:24.819 16852.114 - 16976.945: 98.4087% ( 6) 00:14:24.819 16976.945 - 17101.775: 98.4471% ( 4) 00:14:24.819 17101.775 - 17226.606: 98.5142% ( 7) 00:14:24.819 17226.606 - 17351.436: 98.6196% ( 11) 00:14:24.819 17351.436 - 17476.267: 98.6963% ( 8) 00:14:24.819 17476.267 - 17601.097: 98.7538% ( 6) 00:14:24.819 17601.097 - 17725.928: 98.7730% ( 2) 00:14:24.819 
29085.501 - 29210.331: 98.7826% ( 1) 00:14:24.819 29210.331 - 29335.162: 98.8209% ( 4) 00:14:24.819 29335.162 - 29459.992: 98.8689% ( 5) 00:14:24.819 29459.992 - 29584.823: 98.9072% ( 4) 00:14:24.819 29584.823 - 29709.653: 98.9456% ( 4) 00:14:24.819 29709.653 - 29834.484: 98.9935% ( 5) 00:14:24.819 29834.484 - 29959.314: 99.0414% ( 5) 00:14:24.819 29959.314 - 30084.145: 99.0798% ( 4) 00:14:24.819 30084.145 - 30208.975: 99.1181% ( 4) 00:14:24.819 30208.975 - 30333.806: 99.1660% ( 5) 00:14:24.819 30333.806 - 30458.636: 99.2044% ( 4) 00:14:24.819 30458.636 - 30583.467: 99.2427% ( 4) 00:14:24.819 30583.467 - 30708.297: 99.2906% ( 5) 00:14:24.819 30708.297 - 30833.128: 99.3290% ( 4) 00:14:24.819 30833.128 - 30957.958: 99.3673% ( 4) 00:14:24.819 30957.958 - 31082.789: 99.3865% ( 2) 00:14:24.819 36200.838 - 36450.499: 99.4057% ( 2) 00:14:24.819 36450.499 - 36700.160: 99.4919% ( 9) 00:14:24.819 36700.160 - 36949.821: 99.5686% ( 8) 00:14:24.819 36949.821 - 37199.482: 99.6549% ( 9) 00:14:24.819 37199.482 - 37449.143: 99.7316% ( 8) 00:14:24.819 37449.143 - 37698.804: 99.8179% ( 9) 00:14:24.819 37698.804 - 37948.465: 99.9041% ( 9) 00:14:24.819 37948.465 - 38198.126: 99.9904% ( 9) 00:14:24.819 38198.126 - 38447.787: 100.0000% ( 1) 00:14:24.819 00:14:24.819 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:24.819 ============================================================================== 00:14:24.819 Range in us Cumulative IO count 00:14:24.819 10173.684 - 10236.099: 0.0575% ( 6) 00:14:24.819 10236.099 - 10298.514: 0.1534% ( 10) 00:14:24.819 10298.514 - 10360.930: 0.3355% ( 19) 00:14:24.819 10360.930 - 10423.345: 0.5081% ( 18) 00:14:24.819 10423.345 - 10485.760: 0.8723% ( 38) 00:14:24.819 10485.760 - 10548.175: 1.3900% ( 54) 00:14:24.819 10548.175 - 10610.590: 2.2048% ( 85) 00:14:24.819 10610.590 - 10673.006: 3.2496% ( 109) 00:14:24.819 10673.006 - 10735.421: 4.4287% ( 123) 00:14:24.819 10735.421 - 10797.836: 5.7899% ( 142) 00:14:24.819 10797.836 - 10860.251: 7.1894% ( 146) 00:14:24.819 10860.251 - 10922.667: 8.9340% ( 182) 00:14:24.819 10922.667 - 10985.082: 10.8033% ( 195) 00:14:24.819 10985.082 - 11047.497: 12.6725% ( 195) 00:14:24.819 11047.497 - 11109.912: 14.7239% ( 214) 00:14:24.819 11109.912 - 11172.328: 16.9862% ( 236) 00:14:24.819 11172.328 - 11234.743: 19.7182% ( 285) 00:14:24.819 11234.743 - 11297.158: 22.3255% ( 272) 00:14:24.819 11297.158 - 11359.573: 25.0575% ( 285) 00:14:24.819 11359.573 - 11421.989: 27.6745% ( 273) 00:14:24.819 11421.989 - 11484.404: 30.3585% ( 280) 00:14:24.819 11484.404 - 11546.819: 33.1768% ( 294) 00:14:24.819 11546.819 - 11609.234: 36.2442% ( 320) 00:14:24.819 11609.234 - 11671.650: 39.1200% ( 300) 00:14:24.819 11671.650 - 11734.065: 42.7243% ( 376) 00:14:24.819 11734.065 - 11796.480: 46.3574% ( 379) 00:14:24.819 11796.480 - 11858.895: 49.7891% ( 358) 00:14:24.819 11858.895 - 11921.310: 53.3263% ( 369) 00:14:24.819 11921.310 - 11983.726: 56.7581% ( 358) 00:14:24.819 11983.726 - 12046.141: 60.0268% ( 341) 00:14:24.819 12046.141 - 12108.556: 62.8834% ( 298) 00:14:24.819 12108.556 - 12170.971: 65.2320% ( 245) 00:14:24.819 12170.971 - 12233.387: 67.7435% ( 262) 00:14:24.819 12233.387 - 12295.802: 70.1495% ( 251) 00:14:24.819 12295.802 - 12358.217: 72.4118% ( 236) 00:14:24.819 12358.217 - 12420.632: 74.5111% ( 219) 00:14:24.819 12420.632 - 12483.048: 76.2366% ( 180) 00:14:24.819 12483.048 - 12545.463: 77.9045% ( 174) 00:14:24.819 12545.463 - 12607.878: 79.1123% ( 126) 00:14:24.820 12607.878 - 12670.293: 80.1476% ( 108) 00:14:24.820 12670.293 - 
12732.709: 81.1829% ( 108) 00:14:24.820 12732.709 - 12795.124: 82.1319% ( 99) 00:14:24.820 12795.124 - 12857.539: 82.9275% ( 83) 00:14:24.820 12857.539 - 12919.954: 83.7615% ( 87) 00:14:24.820 12919.954 - 12982.370: 84.5475% ( 82) 00:14:24.820 12982.370 - 13044.785: 85.3240% ( 81) 00:14:24.820 13044.785 - 13107.200: 85.9663% ( 67) 00:14:24.820 13107.200 - 13169.615: 86.6469% ( 71) 00:14:24.820 13169.615 - 13232.030: 87.0878% ( 46) 00:14:24.820 13232.030 - 13294.446: 87.6054% ( 54) 00:14:24.820 13294.446 - 13356.861: 87.9985% ( 41) 00:14:24.820 13356.861 - 13419.276: 88.2573% ( 27) 00:14:24.820 13419.276 - 13481.691: 88.4778% ( 23) 00:14:24.820 13481.691 - 13544.107: 88.7078% ( 24) 00:14:24.820 13544.107 - 13606.522: 88.9379% ( 24) 00:14:24.820 13606.522 - 13668.937: 89.1871% ( 26) 00:14:24.820 13668.937 - 13731.352: 89.4939% ( 32) 00:14:24.820 13731.352 - 13793.768: 89.8102% ( 33) 00:14:24.820 13793.768 - 13856.183: 90.0690% ( 27) 00:14:24.820 13856.183 - 13918.598: 90.2032% ( 14) 00:14:24.820 13918.598 - 13981.013: 90.3374% ( 14) 00:14:24.820 13981.013 - 14043.429: 90.5387% ( 21) 00:14:24.820 14043.429 - 14105.844: 90.7592% ( 23) 00:14:24.820 14105.844 - 14168.259: 90.9988% ( 25) 00:14:24.820 14168.259 - 14230.674: 91.2481% ( 26) 00:14:24.820 14230.674 - 14293.090: 91.5357% ( 30) 00:14:24.820 14293.090 - 14355.505: 91.8232% ( 30) 00:14:24.820 14355.505 - 14417.920: 92.0916% ( 28) 00:14:24.820 14417.920 - 14480.335: 92.3792% ( 30) 00:14:24.820 14480.335 - 14542.750: 92.6476% ( 28) 00:14:24.820 14542.750 - 14605.166: 92.8873% ( 25) 00:14:24.820 14605.166 - 14667.581: 93.2803% ( 41) 00:14:24.820 14667.581 - 14729.996: 93.6158% ( 35) 00:14:24.820 14729.996 - 14792.411: 93.9321% ( 33) 00:14:24.820 14792.411 - 14854.827: 94.2772% ( 36) 00:14:24.820 14854.827 - 14917.242: 94.5169% ( 25) 00:14:24.820 14917.242 - 14979.657: 94.7373% ( 23) 00:14:24.820 14979.657 - 15042.072: 94.9962% ( 27) 00:14:24.820 15042.072 - 15104.488: 95.1879% ( 20) 00:14:24.820 15104.488 - 15166.903: 95.3988% ( 22) 00:14:24.820 15166.903 - 15229.318: 95.5905% ( 20) 00:14:24.820 15229.318 - 15291.733: 95.8014% ( 22) 00:14:24.820 15291.733 - 15354.149: 96.0219% ( 23) 00:14:24.820 15354.149 - 15416.564: 96.1944% ( 18) 00:14:24.820 15416.564 - 15478.979: 96.3574% ( 17) 00:14:24.820 15478.979 - 15541.394: 96.5012% ( 15) 00:14:24.820 15541.394 - 15603.810: 96.6449% ( 15) 00:14:24.820 15603.810 - 15666.225: 96.7887% ( 15) 00:14:24.820 15666.225 - 15728.640: 96.9038% ( 12) 00:14:24.820 15728.640 - 15791.055: 97.0667% ( 17) 00:14:24.820 15791.055 - 15853.470: 97.3255% ( 27) 00:14:24.820 15853.470 - 15915.886: 97.4981% ( 18) 00:14:24.820 15915.886 - 15978.301: 97.6419% ( 15) 00:14:24.820 15978.301 - 16103.131: 97.8815% ( 25) 00:14:24.820 16103.131 - 16227.962: 98.0157% ( 14) 00:14:24.820 16227.962 - 16352.792: 98.1499% ( 14) 00:14:24.820 16352.792 - 16477.623: 98.2937% ( 15) 00:14:24.820 16477.623 - 16602.453: 98.4183% ( 13) 00:14:24.820 16602.453 - 16727.284: 98.5142% ( 10) 00:14:24.820 16727.284 - 16852.114: 98.5525% ( 4) 00:14:24.820 16852.114 - 16976.945: 98.6005% ( 5) 00:14:24.820 16976.945 - 17101.775: 98.6196% ( 2) 00:14:24.820 17101.775 - 17226.606: 98.6484% ( 3) 00:14:24.820 17226.606 - 17351.436: 98.6771% ( 3) 00:14:24.820 17351.436 - 17476.267: 98.7059% ( 3) 00:14:24.820 17476.267 - 17601.097: 98.7442% ( 4) 00:14:24.820 17601.097 - 17725.928: 98.7730% ( 3) 00:14:24.820 27837.196 - 27962.027: 98.8018% ( 3) 00:14:24.820 27962.027 - 28086.857: 98.8497% ( 5) 00:14:24.820 28086.857 - 28211.688: 98.8880% ( 4) 00:14:24.820 
28211.688 - 28336.518: 98.9264% ( 4) 00:14:24.820 28336.518 - 28461.349: 98.9743% ( 5) 00:14:24.820 28461.349 - 28586.179: 99.0127% ( 4) 00:14:24.820 28586.179 - 28711.010: 99.0510% ( 4) 00:14:24.820 28711.010 - 28835.840: 99.0893% ( 4) 00:14:24.820 28835.840 - 28960.670: 99.1277% ( 4) 00:14:24.820 28960.670 - 29085.501: 99.1756% ( 5) 00:14:24.820 29085.501 - 29210.331: 99.2140% ( 4) 00:14:24.820 29210.331 - 29335.162: 99.2619% ( 5) 00:14:24.820 29335.162 - 29459.992: 99.2906% ( 3) 00:14:24.820 29459.992 - 29584.823: 99.3290% ( 4) 00:14:24.820 29584.823 - 29709.653: 99.3673% ( 4) 00:14:24.820 29709.653 - 29834.484: 99.3865% ( 2) 00:14:24.820 35202.194 - 35451.855: 99.4248% ( 4) 00:14:24.820 35451.855 - 35701.516: 99.5111% ( 9) 00:14:24.820 35701.516 - 35951.177: 99.5878% ( 8) 00:14:24.820 35951.177 - 36200.838: 99.6645% ( 8) 00:14:24.820 36200.838 - 36450.499: 99.7412% ( 8) 00:14:24.820 36450.499 - 36700.160: 99.8275% ( 9) 00:14:24.820 36700.160 - 36949.821: 99.9041% ( 8) 00:14:24.820 36949.821 - 37199.482: 99.9904% ( 9) 00:14:24.820 37199.482 - 37449.143: 100.0000% ( 1) 00:14:24.820 00:14:24.820 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:24.820 ============================================================================== 00:14:24.820 Range in us Cumulative IO count 00:14:24.820 10173.684 - 10236.099: 0.0671% ( 7) 00:14:24.820 10236.099 - 10298.514: 0.1534% ( 9) 00:14:24.820 10298.514 - 10360.930: 0.2972% ( 15) 00:14:24.820 10360.930 - 10423.345: 0.6710% ( 39) 00:14:24.820 10423.345 - 10485.760: 1.1887% ( 54) 00:14:24.820 10485.760 - 10548.175: 1.7350% ( 57) 00:14:24.820 10548.175 - 10610.590: 2.6840% ( 99) 00:14:24.820 10610.590 - 10673.006: 3.7864% ( 115) 00:14:24.820 10673.006 - 10735.421: 4.8888% ( 115) 00:14:24.820 10735.421 - 10797.836: 6.0679% ( 123) 00:14:24.820 10797.836 - 10860.251: 7.3715% ( 136) 00:14:24.820 10860.251 - 10922.667: 8.5985% ( 128) 00:14:24.820 10922.667 - 10985.082: 10.1706% ( 164) 00:14:24.820 10985.082 - 11047.497: 11.7715% ( 167) 00:14:24.820 11047.497 - 11109.912: 13.8229% ( 214) 00:14:24.820 11109.912 - 11172.328: 16.0468% ( 232) 00:14:24.820 11172.328 - 11234.743: 18.7788% ( 285) 00:14:24.820 11234.743 - 11297.158: 21.6737% ( 302) 00:14:24.820 11297.158 - 11359.573: 24.5303% ( 298) 00:14:24.820 11359.573 - 11421.989: 27.3965% ( 299) 00:14:24.820 11421.989 - 11484.404: 30.3393% ( 307) 00:14:24.820 11484.404 - 11546.819: 33.3206% ( 311) 00:14:24.820 11546.819 - 11609.234: 36.3785% ( 319) 00:14:24.820 11609.234 - 11671.650: 39.6664% ( 343) 00:14:24.820 11671.650 - 11734.065: 42.9927% ( 347) 00:14:24.820 11734.065 - 11796.480: 46.3669% ( 352) 00:14:24.820 11796.480 - 11858.895: 49.6549% ( 343) 00:14:24.820 11858.895 - 11921.310: 52.9141% ( 340) 00:14:24.820 11921.310 - 11983.726: 56.1733% ( 340) 00:14:24.820 11983.726 - 12046.141: 59.6434% ( 362) 00:14:24.820 12046.141 - 12108.556: 62.8067% ( 330) 00:14:24.820 12108.556 - 12170.971: 65.6729% ( 299) 00:14:24.820 12170.971 - 12233.387: 68.3762% ( 282) 00:14:24.820 12233.387 - 12295.802: 70.8110% ( 254) 00:14:24.820 12295.802 - 12358.217: 73.0732% ( 236) 00:14:24.820 12358.217 - 12420.632: 75.2205% ( 224) 00:14:24.820 12420.632 - 12483.048: 76.9747% ( 183) 00:14:24.820 12483.048 - 12545.463: 78.5372% ( 163) 00:14:24.820 12545.463 - 12607.878: 79.7450% ( 126) 00:14:24.820 12607.878 - 12670.293: 80.8570% ( 116) 00:14:24.820 12670.293 - 12732.709: 81.9210% ( 111) 00:14:24.820 12732.709 - 12795.124: 82.7933% ( 91) 00:14:24.820 12795.124 - 12857.539: 83.3877% ( 62) 00:14:24.820 12857.539 - 
12919.954: 83.9436% ( 58) 00:14:24.820 12919.954 - 12982.370: 84.5859% ( 67) 00:14:24.820 12982.370 - 13044.785: 85.3432% ( 79) 00:14:24.820 13044.785 - 13107.200: 86.0142% ( 70) 00:14:24.820 13107.200 - 13169.615: 86.5031% ( 51) 00:14:24.820 13169.615 - 13232.030: 87.0495% ( 57) 00:14:24.820 13232.030 - 13294.446: 87.5000% ( 47) 00:14:24.820 13294.446 - 13356.861: 87.8451% ( 36) 00:14:24.820 13356.861 - 13419.276: 88.1710% ( 34) 00:14:24.820 13419.276 - 13481.691: 88.4778% ( 32) 00:14:24.820 13481.691 - 13544.107: 88.7558% ( 29) 00:14:24.820 13544.107 - 13606.522: 89.0146% ( 27) 00:14:24.820 13606.522 - 13668.937: 89.2255% ( 22) 00:14:24.820 13668.937 - 13731.352: 89.3788% ( 16) 00:14:24.820 13731.352 - 13793.768: 89.5514% ( 18) 00:14:24.820 13793.768 - 13856.183: 89.7143% ( 17) 00:14:24.820 13856.183 - 13918.598: 89.9061% ( 20) 00:14:24.820 13918.598 - 13981.013: 90.1361% ( 24) 00:14:24.820 13981.013 - 14043.429: 90.2799% ( 15) 00:14:24.820 14043.429 - 14105.844: 90.4141% ( 14) 00:14:24.820 14105.844 - 14168.259: 90.5962% ( 19) 00:14:24.820 14168.259 - 14230.674: 90.8263% ( 24) 00:14:24.821 14230.674 - 14293.090: 91.1331% ( 32) 00:14:24.821 14293.090 - 14355.505: 91.4206% ( 30) 00:14:24.821 14355.505 - 14417.920: 91.7561% ( 35) 00:14:24.821 14417.920 - 14480.335: 92.1108% ( 37) 00:14:24.821 14480.335 - 14542.750: 92.4271% ( 33) 00:14:24.821 14542.750 - 14605.166: 92.7339% ( 32) 00:14:24.821 14605.166 - 14667.581: 93.0311% ( 31) 00:14:24.821 14667.581 - 14729.996: 93.4337% ( 42) 00:14:24.821 14729.996 - 14792.411: 93.7788% ( 36) 00:14:24.821 14792.411 - 14854.827: 94.0663% ( 30) 00:14:24.821 14854.827 - 14917.242: 94.3156% ( 26) 00:14:24.821 14917.242 - 14979.657: 94.5648% ( 26) 00:14:24.821 14979.657 - 15042.072: 94.8236% ( 27) 00:14:24.821 15042.072 - 15104.488: 95.0824% ( 27) 00:14:24.821 15104.488 - 15166.903: 95.3125% ( 24) 00:14:24.821 15166.903 - 15229.318: 95.5330% ( 23) 00:14:24.821 15229.318 - 15291.733: 95.7535% ( 23) 00:14:24.821 15291.733 - 15354.149: 95.9260% ( 18) 00:14:24.821 15354.149 - 15416.564: 96.0890% ( 17) 00:14:24.821 15416.564 - 15478.979: 96.2519% ( 17) 00:14:24.821 15478.979 - 15541.394: 96.4436% ( 20) 00:14:24.821 15541.394 - 15603.810: 96.6641% ( 23) 00:14:24.821 15603.810 - 15666.225: 96.8558% ( 20) 00:14:24.821 15666.225 - 15728.640: 97.0188% ( 17) 00:14:24.821 15728.640 - 15791.055: 97.1626% ( 15) 00:14:24.821 15791.055 - 15853.470: 97.3064% ( 15) 00:14:24.821 15853.470 - 15915.886: 97.4406% ( 14) 00:14:24.821 15915.886 - 15978.301: 97.6323% ( 20) 00:14:24.821 15978.301 - 16103.131: 97.7761% ( 15) 00:14:24.821 16103.131 - 16227.962: 97.8911% ( 12) 00:14:24.821 16227.962 - 16352.792: 97.9678% ( 8) 00:14:24.821 16352.792 - 16477.623: 98.0541% ( 9) 00:14:24.821 16477.623 - 16602.453: 98.1116% ( 6) 00:14:24.821 16602.453 - 16727.284: 98.2266% ( 12) 00:14:24.821 16727.284 - 16852.114: 98.2650% ( 4) 00:14:24.821 16852.114 - 16976.945: 98.3321% ( 7) 00:14:24.821 16976.945 - 17101.775: 98.3800% ( 5) 00:14:24.821 17101.775 - 17226.606: 98.4183% ( 4) 00:14:24.821 17226.606 - 17351.436: 98.4663% ( 5) 00:14:24.821 17351.436 - 17476.267: 98.5238% ( 6) 00:14:24.821 17476.267 - 17601.097: 98.5717% ( 5) 00:14:24.821 17601.097 - 17725.928: 98.6676% ( 10) 00:14:24.821 17725.928 - 17850.758: 98.7059% ( 4) 00:14:24.821 17850.758 - 17975.589: 98.7347% ( 3) 00:14:24.821 17975.589 - 18100.419: 98.7634% ( 3) 00:14:24.821 18100.419 - 18225.250: 98.7730% ( 1) 00:14:24.821 25839.909 - 25964.739: 98.7826% ( 1) 00:14:24.821 25964.739 - 26089.570: 98.8209% ( 4) 00:14:24.821 26089.570 - 
26214.400: 98.8593% ( 4) 00:14:24.821 26214.400 - 26339.230: 98.8976% ( 4) 00:14:24.821 26339.230 - 26464.061: 98.9456% ( 5) 00:14:24.821 26464.061 - 26588.891: 98.9743% ( 3) 00:14:24.821 26588.891 - 26713.722: 99.0222% ( 5) 00:14:24.821 26713.722 - 26838.552: 99.0606% ( 4) 00:14:24.821 26838.552 - 26963.383: 99.1085% ( 5) 00:14:24.821 26963.383 - 27088.213: 99.1469% ( 4) 00:14:24.821 27088.213 - 27213.044: 99.1852% ( 4) 00:14:24.821 27213.044 - 27337.874: 99.2331% ( 5) 00:14:24.821 27337.874 - 27462.705: 99.2715% ( 4) 00:14:24.821 27462.705 - 27587.535: 99.3098% ( 4) 00:14:24.821 27587.535 - 27712.366: 99.3482% ( 4) 00:14:24.821 27712.366 - 27837.196: 99.3865% ( 4) 00:14:24.821 33454.568 - 33704.229: 99.3961% ( 1) 00:14:24.821 33704.229 - 33953.890: 99.4728% ( 8) 00:14:24.821 33953.890 - 34203.550: 99.5495% ( 8) 00:14:24.821 34203.550 - 34453.211: 99.6262% ( 8) 00:14:24.821 34453.211 - 34702.872: 99.6837% ( 6) 00:14:24.821 34702.872 - 34952.533: 99.7604% ( 8) 00:14:24.821 34952.533 - 35202.194: 99.8275% ( 7) 00:14:24.821 35202.194 - 35451.855: 99.9041% ( 8) 00:14:24.821 35451.855 - 35701.516: 99.9808% ( 8) 00:14:24.821 35701.516 - 35951.177: 100.0000% ( 2) 00:14:24.821 00:14:24.821 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:24.821 ============================================================================== 00:14:24.821 Range in us Cumulative IO count 00:14:24.821 10111.269 - 10173.684: 0.0096% ( 1) 00:14:24.821 10173.684 - 10236.099: 0.0383% ( 3) 00:14:24.821 10236.099 - 10298.514: 0.1150% ( 8) 00:14:24.821 10298.514 - 10360.930: 0.2780% ( 17) 00:14:24.821 10360.930 - 10423.345: 0.5560% ( 29) 00:14:24.821 10423.345 - 10485.760: 0.9873% ( 45) 00:14:24.821 10485.760 - 10548.175: 1.6584% ( 70) 00:14:24.821 10548.175 - 10610.590: 2.4540% ( 83) 00:14:24.821 10610.590 - 10673.006: 3.3455% ( 93) 00:14:24.821 10673.006 - 10735.421: 4.3137% ( 101) 00:14:24.821 10735.421 - 10797.836: 5.5502% ( 129) 00:14:24.821 10797.836 - 10860.251: 7.0648% ( 158) 00:14:24.821 10860.251 - 10922.667: 8.7232% ( 173) 00:14:24.821 10922.667 - 10985.082: 10.3336% ( 168) 00:14:24.821 10985.082 - 11047.497: 12.1453% ( 189) 00:14:24.821 11047.497 - 11109.912: 14.2063% ( 215) 00:14:24.821 11109.912 - 11172.328: 16.3727% ( 226) 00:14:24.821 11172.328 - 11234.743: 19.0088% ( 275) 00:14:24.821 11234.743 - 11297.158: 21.7216% ( 283) 00:14:24.821 11297.158 - 11359.573: 24.4153% ( 281) 00:14:24.821 11359.573 - 11421.989: 27.1472% ( 285) 00:14:24.821 11421.989 - 11484.404: 30.1285% ( 311) 00:14:24.821 11484.404 - 11546.819: 33.1001% ( 310) 00:14:24.821 11546.819 - 11609.234: 36.1388% ( 317) 00:14:24.821 11609.234 - 11671.650: 39.3501% ( 335) 00:14:24.821 11671.650 - 11734.065: 42.8393% ( 364) 00:14:24.821 11734.065 - 11796.480: 46.3574% ( 367) 00:14:24.821 11796.480 - 11858.895: 49.7699% ( 356) 00:14:24.821 11858.895 - 11921.310: 52.8758% ( 324) 00:14:24.821 11921.310 - 11983.726: 56.0104% ( 327) 00:14:24.821 11983.726 - 12046.141: 59.3750% ( 351) 00:14:24.821 12046.141 - 12108.556: 62.0399% ( 278) 00:14:24.821 12108.556 - 12170.971: 64.8102% ( 289) 00:14:24.821 12170.971 - 12233.387: 67.5422% ( 285) 00:14:24.821 12233.387 - 12295.802: 70.1591% ( 273) 00:14:24.821 12295.802 - 12358.217: 72.5939% ( 254) 00:14:24.821 12358.217 - 12420.632: 74.9712% ( 248) 00:14:24.821 12420.632 - 12483.048: 77.0418% ( 216) 00:14:24.821 12483.048 - 12545.463: 78.7385% ( 177) 00:14:24.821 12545.463 - 12607.878: 80.1093% ( 143) 00:14:24.821 12607.878 - 12670.293: 81.1254% ( 106) 00:14:24.821 12670.293 - 12732.709: 82.0073% ( 
92) 00:14:24.821 12732.709 - 12795.124: 82.9659% ( 100) 00:14:24.821 12795.124 - 12857.539: 83.7998% ( 87) 00:14:24.821 12857.539 - 12919.954: 84.4996% ( 73) 00:14:24.821 12919.954 - 12982.370: 85.0460% ( 57) 00:14:24.821 12982.370 - 13044.785: 85.4774% ( 45) 00:14:24.821 13044.785 - 13107.200: 86.0142% ( 56) 00:14:24.821 13107.200 - 13169.615: 86.7044% ( 72) 00:14:24.821 13169.615 - 13232.030: 87.1262% ( 44) 00:14:24.821 13232.030 - 13294.446: 87.4521% ( 34) 00:14:24.821 13294.446 - 13356.861: 87.7972% ( 36) 00:14:24.821 13356.861 - 13419.276: 88.1614% ( 38) 00:14:24.821 13419.276 - 13481.691: 88.4011% ( 25) 00:14:24.821 13481.691 - 13544.107: 88.6120% ( 22) 00:14:24.821 13544.107 - 13606.522: 88.8420% ( 24) 00:14:24.821 13606.522 - 13668.937: 89.0146% ( 18) 00:14:24.821 13668.937 - 13731.352: 89.2350% ( 23) 00:14:24.821 13731.352 - 13793.768: 89.3884% ( 16) 00:14:24.821 13793.768 - 13856.183: 89.5610% ( 18) 00:14:24.821 13856.183 - 13918.598: 89.6952% ( 14) 00:14:24.821 13918.598 - 13981.013: 89.9061% ( 22) 00:14:24.821 13981.013 - 14043.429: 90.2128% ( 32) 00:14:24.821 14043.429 - 14105.844: 90.5196% ( 32) 00:14:24.821 14105.844 - 14168.259: 90.7400% ( 23) 00:14:24.821 14168.259 - 14230.674: 90.9605% ( 23) 00:14:24.821 14230.674 - 14293.090: 91.2673% ( 32) 00:14:24.821 14293.090 - 14355.505: 91.6028% ( 35) 00:14:24.821 14355.505 - 14417.920: 91.8808% ( 29) 00:14:24.821 14417.920 - 14480.335: 92.1492% ( 28) 00:14:24.821 14480.335 - 14542.750: 92.3696% ( 23) 00:14:24.821 14542.750 - 14605.166: 92.6093% ( 25) 00:14:24.821 14605.166 - 14667.581: 92.8106% ( 21) 00:14:24.821 14667.581 - 14729.996: 93.0023% ( 20) 00:14:24.821 14729.996 - 14792.411: 93.1748% ( 18) 00:14:24.821 14792.411 - 14854.827: 93.4145% ( 25) 00:14:24.822 14854.827 - 14917.242: 93.6062% ( 20) 00:14:24.822 14917.242 - 14979.657: 93.8938% ( 30) 00:14:24.822 14979.657 - 15042.072: 94.2101% ( 33) 00:14:24.822 15042.072 - 15104.488: 94.5265% ( 33) 00:14:24.822 15104.488 - 15166.903: 94.8428% ( 33) 00:14:24.822 15166.903 - 15229.318: 95.1400% ( 31) 00:14:24.822 15229.318 - 15291.733: 95.3700% ( 24) 00:14:24.822 15291.733 - 15354.149: 95.6192% ( 26) 00:14:24.822 15354.149 - 15416.564: 95.8589% ( 25) 00:14:24.822 15416.564 - 15478.979: 96.1177% ( 27) 00:14:24.822 15478.979 - 15541.394: 96.3094% ( 20) 00:14:24.822 15541.394 - 15603.810: 96.5683% ( 27) 00:14:24.822 15603.810 - 15666.225: 96.7887% ( 23) 00:14:24.822 15666.225 - 15728.640: 97.0188% ( 24) 00:14:24.822 15728.640 - 15791.055: 97.2105% ( 20) 00:14:24.822 15791.055 - 15853.470: 97.3926% ( 19) 00:14:24.822 15853.470 - 15915.886: 97.5364% ( 15) 00:14:24.822 15915.886 - 15978.301: 97.6323% ( 10) 00:14:24.822 15978.301 - 16103.131: 97.7952% ( 17) 00:14:24.822 16103.131 - 16227.962: 97.9294% ( 14) 00:14:24.822 16227.962 - 16352.792: 98.0349% ( 11) 00:14:24.822 16352.792 - 16477.623: 98.0732% ( 4) 00:14:24.822 16477.623 - 16602.453: 98.1116% ( 4) 00:14:24.822 16602.453 - 16727.284: 98.1787% ( 7) 00:14:24.822 16727.284 - 16852.114: 98.2266% ( 5) 00:14:24.822 16852.114 - 16976.945: 98.2745% ( 5) 00:14:24.822 16976.945 - 17101.775: 98.3225% ( 5) 00:14:24.822 17101.775 - 17226.606: 98.3704% ( 5) 00:14:24.822 17226.606 - 17351.436: 98.4087% ( 4) 00:14:24.822 17351.436 - 17476.267: 98.4567% ( 5) 00:14:24.822 17476.267 - 17601.097: 98.5046% ( 5) 00:14:24.822 17601.097 - 17725.928: 98.5429% ( 4) 00:14:24.822 17725.928 - 17850.758: 98.6005% ( 6) 00:14:24.822 17850.758 - 17975.589: 98.6771% ( 8) 00:14:24.822 17975.589 - 18100.419: 98.6963% ( 2) 00:14:24.822 18100.419 - 18225.250: 98.7251% 
( 3) 00:14:24.822 18225.250 - 18350.080: 98.7538% ( 3) 00:14:24.822 18350.080 - 18474.910: 98.7730% ( 2) 00:14:24.822 23967.451 - 24092.282: 98.8018% ( 3) 00:14:24.822 24092.282 - 24217.112: 98.8497% ( 5) 00:14:24.822 24217.112 - 24341.943: 98.8880% ( 4) 00:14:24.822 24341.943 - 24466.773: 98.9360% ( 5) 00:14:24.822 24466.773 - 24591.604: 98.9743% ( 4) 00:14:24.822 24591.604 - 24716.434: 99.0222% ( 5) 00:14:24.822 24716.434 - 24841.265: 99.0606% ( 4) 00:14:24.822 24841.265 - 24966.095: 99.0989% ( 4) 00:14:24.822 24966.095 - 25090.926: 99.1373% ( 4) 00:14:24.822 25090.926 - 25215.756: 99.1852% ( 5) 00:14:24.822 25215.756 - 25340.587: 99.2235% ( 4) 00:14:24.822 25340.587 - 25465.417: 99.2715% ( 5) 00:14:24.822 25465.417 - 25590.248: 99.3098% ( 4) 00:14:24.822 25590.248 - 25715.078: 99.3482% ( 4) 00:14:24.822 25715.078 - 25839.909: 99.3865% ( 4) 00:14:24.822 31831.771 - 31956.602: 99.4057% ( 2) 00:14:24.822 31956.602 - 32206.263: 99.4919% ( 9) 00:14:24.822 32206.263 - 32455.924: 99.5495% ( 6) 00:14:24.822 32455.924 - 32705.585: 99.6262% ( 8) 00:14:24.822 32705.585 - 32955.246: 99.6933% ( 7) 00:14:24.822 32955.246 - 33204.907: 99.7604% ( 7) 00:14:24.822 33204.907 - 33454.568: 99.8275% ( 7) 00:14:24.822 33454.568 - 33704.229: 99.9041% ( 8) 00:14:24.822 33704.229 - 33953.890: 99.9712% ( 7) 00:14:24.822 33953.890 - 34203.550: 100.0000% ( 3) 00:14:24.822 00:14:24.822 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:24.822 ============================================================================== 00:14:24.822 Range in us Cumulative IO count 00:14:24.822 10048.853 - 10111.269: 0.0095% ( 1) 00:14:24.822 10173.684 - 10236.099: 0.0191% ( 1) 00:14:24.822 10298.514 - 10360.930: 0.1239% ( 11) 00:14:24.822 10360.930 - 10423.345: 0.3239% ( 21) 00:14:24.822 10423.345 - 10485.760: 0.7431% ( 44) 00:14:24.822 10485.760 - 10548.175: 1.4577% ( 75) 00:14:24.822 10548.175 - 10610.590: 2.2866% ( 87) 00:14:24.822 10610.590 - 10673.006: 3.1822% ( 94) 00:14:24.822 10673.006 - 10735.421: 4.3826% ( 126) 00:14:24.822 10735.421 - 10797.836: 5.5545% ( 123) 00:14:24.822 10797.836 - 10860.251: 6.9360% ( 145) 00:14:24.822 10860.251 - 10922.667: 8.4127% ( 155) 00:14:24.822 10922.667 - 10985.082: 10.0229% ( 169) 00:14:24.822 10985.082 - 11047.497: 11.7283% ( 179) 00:14:24.822 11047.497 - 11109.912: 13.9291% ( 231) 00:14:24.822 11109.912 - 11172.328: 16.3967% ( 259) 00:14:24.822 11172.328 - 11234.743: 19.1311% ( 287) 00:14:24.822 11234.743 - 11297.158: 21.9989% ( 301) 00:14:24.822 11297.158 - 11359.573: 25.0286% ( 318) 00:14:24.822 11359.573 - 11421.989: 27.8868% ( 300) 00:14:24.822 11421.989 - 11484.404: 30.5354% ( 278) 00:14:24.822 11484.404 - 11546.819: 33.4604% ( 307) 00:14:24.822 11546.819 - 11609.234: 36.3948% ( 308) 00:14:24.822 11609.234 - 11671.650: 39.3007% ( 305) 00:14:24.822 11671.650 - 11734.065: 42.2066% ( 305) 00:14:24.822 11734.065 - 11796.480: 45.6841% ( 365) 00:14:24.822 11796.480 - 11858.895: 49.1044% ( 359) 00:14:24.822 11858.895 - 11921.310: 52.5343% ( 360) 00:14:24.822 11921.310 - 11983.726: 55.6498% ( 327) 00:14:24.822 11983.726 - 12046.141: 58.9177% ( 343) 00:14:24.822 12046.141 - 12108.556: 61.8807% ( 311) 00:14:24.822 12108.556 - 12170.971: 64.7961% ( 306) 00:14:24.822 12170.971 - 12233.387: 67.5591% ( 290) 00:14:24.822 12233.387 - 12295.802: 69.9409% ( 250) 00:14:24.822 12295.802 - 12358.217: 71.8655% ( 202) 00:14:24.822 12358.217 - 12420.632: 73.8948% ( 213) 00:14:24.822 12420.632 - 12483.048: 75.7431% ( 194) 00:14:24.822 12483.048 - 12545.463: 77.3819% ( 172) 00:14:24.822 
12545.463 - 12607.878: 78.8586% ( 155) 00:14:24.822 12607.878 - 12670.293: 80.3068% ( 152) 00:14:24.822 12670.293 - 12732.709: 81.5739% ( 133) 00:14:24.822 12732.709 - 12795.124: 82.6124% ( 109) 00:14:24.822 12795.124 - 12857.539: 83.5080% ( 94) 00:14:24.822 12857.539 - 12919.954: 84.2893% ( 82) 00:14:24.822 12919.954 - 12982.370: 84.9562% ( 70) 00:14:24.822 12982.370 - 13044.785: 85.5373% ( 61) 00:14:24.822 13044.785 - 13107.200: 86.0804% ( 57) 00:14:24.822 13107.200 - 13169.615: 86.5187% ( 46) 00:14:24.822 13169.615 - 13232.030: 87.0046% ( 51) 00:14:24.822 13232.030 - 13294.446: 87.5667% ( 59) 00:14:24.822 13294.446 - 13356.861: 87.9383% ( 39) 00:14:24.822 13356.861 - 13419.276: 88.3575% ( 44) 00:14:24.822 13419.276 - 13481.691: 88.7481% ( 41) 00:14:24.822 13481.691 - 13544.107: 89.0625% ( 33) 00:14:24.822 13544.107 - 13606.522: 89.2721% ( 22) 00:14:24.822 13606.522 - 13668.937: 89.4436% ( 18) 00:14:24.822 13668.937 - 13731.352: 89.6627% ( 23) 00:14:24.822 13731.352 - 13793.768: 89.8247% ( 17) 00:14:24.822 13793.768 - 13856.183: 90.0057% ( 19) 00:14:24.822 13856.183 - 13918.598: 90.2344% ( 24) 00:14:24.822 13918.598 - 13981.013: 90.3868% ( 16) 00:14:24.822 13981.013 - 14043.429: 90.5488% ( 17) 00:14:24.822 14043.429 - 14105.844: 90.7489% ( 21) 00:14:24.822 14105.844 - 14168.259: 90.9489% ( 21) 00:14:24.822 14168.259 - 14230.674: 91.2824% ( 35) 00:14:24.822 14230.674 - 14293.090: 91.5587% ( 29) 00:14:24.822 14293.090 - 14355.505: 91.7969% ( 25) 00:14:24.822 14355.505 - 14417.920: 92.0446% ( 26) 00:14:24.822 14417.920 - 14480.335: 92.2542% ( 22) 00:14:24.822 14480.335 - 14542.750: 92.4162% ( 17) 00:14:24.822 14542.750 - 14605.166: 92.5972% ( 19) 00:14:24.822 14605.166 - 14667.581: 92.7687% ( 18) 00:14:24.822 14667.581 - 14729.996: 92.9402% ( 18) 00:14:24.822 14729.996 - 14792.411: 93.1307% ( 20) 00:14:24.822 14792.411 - 14854.827: 93.2927% ( 17) 00:14:24.822 14854.827 - 14917.242: 93.4832% ( 20) 00:14:24.822 14917.242 - 14979.657: 93.7691% ( 30) 00:14:24.822 14979.657 - 15042.072: 94.1025% ( 35) 00:14:24.822 15042.072 - 15104.488: 94.4455% ( 36) 00:14:24.822 15104.488 - 15166.903: 94.8075% ( 38) 00:14:24.822 15166.903 - 15229.318: 95.1886% ( 40) 00:14:24.822 15229.318 - 15291.733: 95.5126% ( 34) 00:14:24.822 15291.733 - 15354.149: 95.7793% ( 28) 00:14:24.822 15354.149 - 15416.564: 96.0271% ( 26) 00:14:24.822 15416.564 - 15478.979: 96.2557% ( 24) 00:14:24.822 15478.979 - 15541.394: 96.4367% ( 19) 00:14:24.822 15541.394 - 15603.810: 96.5987% ( 17) 00:14:24.822 15603.810 - 15666.225: 96.7702% ( 18) 00:14:24.822 15666.225 - 15728.640: 96.9226% ( 16) 00:14:24.822 15728.640 - 15791.055: 97.0560% ( 14) 00:14:24.823 15791.055 - 15853.470: 97.2180% ( 17) 00:14:24.823 15853.470 - 15915.886: 97.3609% ( 15) 00:14:24.823 15915.886 - 15978.301: 97.5038% ( 15) 00:14:24.823 15978.301 - 16103.131: 97.7896% ( 30) 00:14:24.823 16103.131 - 16227.962: 97.9421% ( 16) 00:14:24.823 16227.962 - 16352.792: 98.0850% ( 15) 00:14:24.823 16352.792 - 16477.623: 98.1707% ( 9) 00:14:24.823 16727.284 - 16852.114: 98.1898% ( 2) 00:14:24.823 16852.114 - 16976.945: 98.2374% ( 5) 00:14:24.823 16976.945 - 17101.775: 98.2755% ( 4) 00:14:24.823 17101.775 - 17226.606: 98.3422% ( 7) 00:14:24.823 17226.606 - 17351.436: 98.4280% ( 9) 00:14:24.823 17351.436 - 17476.267: 98.5328% ( 11) 00:14:24.823 17476.267 - 17601.097: 98.6090% ( 8) 00:14:24.823 17601.097 - 17725.928: 98.6947% ( 9) 00:14:24.823 17725.928 - 17850.758: 98.7900% ( 10) 00:14:24.823 17850.758 - 17975.589: 98.8567% ( 7) 00:14:24.823 17975.589 - 18100.419: 98.9615% ( 11) 
00:14:24.823 18100.419 - 18225.250: 99.0949% ( 14) 00:14:24.823 18225.250 - 18350.080: 99.2569% ( 17) 00:14:24.823 18350.080 - 18474.910: 99.3236% ( 7) 00:14:24.823 18474.910 - 18599.741: 99.3807% ( 6) 00:14:24.823 18599.741 - 18724.571: 99.3902% ( 1) 00:14:24.823 23468.130 - 23592.960: 99.3998% ( 1) 00:14:24.823 23592.960 - 23717.790: 99.4379% ( 4) 00:14:24.823 23717.790 - 23842.621: 99.4760% ( 4) 00:14:24.823 23842.621 - 23967.451: 99.5236% ( 5) 00:14:24.823 23967.451 - 24092.282: 99.5617% ( 4) 00:14:24.823 24092.282 - 24217.112: 99.6094% ( 5) 00:14:24.823 24217.112 - 24341.943: 99.6475% ( 4) 00:14:24.823 24341.943 - 24466.773: 99.6856% ( 4) 00:14:24.823 24466.773 - 24591.604: 99.7237% ( 4) 00:14:24.823 24591.604 - 24716.434: 99.7618% ( 4) 00:14:24.823 24716.434 - 24841.265: 99.8095% ( 5) 00:14:24.823 24841.265 - 24966.095: 99.8476% ( 4) 00:14:24.823 24966.095 - 25090.926: 99.8857% ( 4) 00:14:24.823 25090.926 - 25215.756: 99.9238% ( 4) 00:14:24.823 25215.756 - 25340.587: 99.9619% ( 4) 00:14:24.823 25340.587 - 25465.417: 100.0000% ( 4) 00:14:24.823 00:14:25.083 15:26:10 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:14:25.083 00:14:25.083 real 0m2.861s 00:14:25.083 user 0m2.352s 00:14:25.083 sys 0m0.391s 00:14:25.083 15:26:10 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.083 15:26:10 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:14:25.083 ************************************ 00:14:25.083 END TEST nvme_perf 00:14:25.083 ************************************ 00:14:25.083 15:26:10 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:14:25.083 15:26:10 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:25.083 15:26:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.083 15:26:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.083 ************************************ 00:14:25.083 START TEST nvme_hello_world 00:14:25.083 ************************************ 00:14:25.083 15:26:10 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:14:25.342 Initializing NVMe Controllers 00:14:25.342 Attached to 0000:00:10.0 00:14:25.343 Namespace ID: 1 size: 6GB 00:14:25.343 Attached to 0000:00:11.0 00:14:25.343 Namespace ID: 1 size: 5GB 00:14:25.343 Attached to 0000:00:13.0 00:14:25.343 Namespace ID: 1 size: 1GB 00:14:25.343 Attached to 0000:00:12.0 00:14:25.343 Namespace ID: 1 size: 4GB 00:14:25.343 Namespace ID: 2 size: 4GB 00:14:25.343 Namespace ID: 3 size: 4GB 00:14:25.343 Initialization complete. 00:14:25.343 INFO: using host memory buffer for IO 00:14:25.343 Hello world! 00:14:25.343 INFO: using host memory buffer for IO 00:14:25.343 Hello world! 00:14:25.343 INFO: using host memory buffer for IO 00:14:25.343 Hello world! 00:14:25.343 INFO: using host memory buffer for IO 00:14:25.343 Hello world! 00:14:25.343 INFO: using host memory buffer for IO 00:14:25.343 Hello world! 00:14:25.343 INFO: using host memory buffer for IO 00:14:25.343 Hello world! 
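A note on the hello-world output above: one "Hello world!" is printed per attached namespace, and the INFO lines record that a plain host DMA buffer carried the I/O (the example can use a controller memory buffer when one is exposed). The flow is write-then-read-back through a polled I/O qpair; a minimal sketch of the per-namespace loop, assuming the controller was already attached via spdk_nvme_probe() (probe/attach callbacks, error handling, and the read-back are omitted):

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static volatile bool g_done;

    /* completion callback: marks the outstanding write (or read) finished */
    static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        g_done = true;
    }

    static void hello_ns(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
    {
        struct spdk_nvme_qpair *qpair =
            spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        uint32_t sz = spdk_nvme_ns_get_sector_size(ns);
        /* DMA-able host buffer -- the "host memory buffer for IO" above */
        char *buf = spdk_zmalloc(sz, 0x1000, NULL,
                                 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

        snprintf(buf, sz, "Hello world!");
        g_done = false;
        spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* lba */, 1 /* lba count */,
                               io_complete, NULL, 0);
        while (!g_done)   /* SPDK is polled-mode: reap completions by hand */
            spdk_nvme_qpair_process_completions(qpair, 0);

        /* a read back through spdk_nvme_ns_cmd_read() plus a printf of the
         * buffer produces the "Hello world!" lines in the log */
        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qpair);
    }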
00:14:25.343 ************************************ 00:14:25.343 END TEST nvme_hello_world 00:14:25.343 ************************************ 00:14:25.343 00:14:25.343 real 0m0.400s 00:14:25.343 user 0m0.161s 00:14:25.343 sys 0m0.193s 00:14:25.343 15:26:11 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.343 15:26:11 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:25.602 15:26:11 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:14:25.602 15:26:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:25.602 15:26:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.602 15:26:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.602 ************************************ 00:14:25.602 START TEST nvme_sgl 00:14:25.602 ************************************ 00:14:25.602 15:26:11 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:14:25.862 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:14:25.862 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:14:25.862 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:14:25.862 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:14:25.862 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:14:25.862 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:14:25.862 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:14:25.862 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:14:25.862 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:14:25.862 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:14:25.862 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:14:25.862 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:14:25.862 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:14:25.862 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:14:25.862 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:14:25.862 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:14:25.862 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:14:25.862 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:14:25.862 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:14:25.862 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:14:25.862 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:14:25.862 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:14:25.862 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:14:25.862 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:14:25.862 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:14:25.862 NVMe Readv/Writev Request test 00:14:25.862 Attached to 0000:00:10.0 00:14:25.862 Attached to 0000:00:11.0 00:14:25.862 Attached to 0000:00:13.0 00:14:25.862 Attached to 0000:00:12.0 00:14:25.862 0000:00:10.0: build_io_request_2 test passed 00:14:25.862 0000:00:10.0: build_io_request_4 test passed 00:14:25.862 0000:00:10.0: build_io_request_5 test passed 00:14:25.862 0000:00:10.0: build_io_request_6 test passed 00:14:25.862 0000:00:10.0: build_io_request_7 test passed 00:14:25.862 0000:00:10.0: build_io_request_10 test passed 00:14:25.862 0000:00:11.0: build_io_request_2 test passed 00:14:25.862 0000:00:11.0: build_io_request_4 test passed 00:14:25.862 0000:00:11.0: build_io_request_5 test passed 00:14:25.862 0000:00:11.0: build_io_request_6 test passed 00:14:25.862 0000:00:11.0: build_io_request_7 test passed 00:14:25.862 0000:00:11.0: build_io_request_10 test passed 00:14:25.862 Cleaning up... 00:14:26.121 00:14:26.121 real 0m0.494s 00:14:26.121 user 0m0.245s 00:14:26.121 sys 0m0.202s 00:14:26.121 15:26:11 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.121 15:26:11 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:14:26.121 ************************************ 00:14:26.121 END TEST nvme_sgl 00:14:26.121 ************************************ 00:14:26.121 15:26:11 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:14:26.121 15:26:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.121 15:26:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.121 15:26:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.121 ************************************ 00:14:26.121 START TEST nvme_e2edp 00:14:26.121 ************************************ 00:14:26.121 15:26:11 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:14:26.379 NVMe Write/Read with End-to-End data protection test 00:14:26.379 Attached to 0000:00:10.0 00:14:26.379 Attached to 0000:00:11.0 00:14:26.379 Attached to 0000:00:13.0 00:14:26.379 Attached to 0000:00:12.0 00:14:26.379 Cleaning up... 
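The nvme_sgl pass/fail lines above come from the scattered-payload request builders: each build_io_request_N composes a differently shaped scatter list, and the ones whose total length violates the controller's rules are rejected up front ("Invalid IO length parameter") while the rest complete ("test passed"). Vectored I/O in this API hands the driver two callbacks instead of a flat buffer; a sketch under illustrative names (sgl_ctx and its fields are ours, not the test's):

    #include "spdk/nvme.h"

    struct sgl_ctx {                       /* illustrative scatter list */
        struct { void *base; uint32_t len; } sge[4];
        int idx;
    };

    /* driver (re)starts walking the payload at byte 'offset' */
    static void reset_sgl(void *arg, uint32_t offset)
    {
        ((struct sgl_ctx *)arg)->idx = 0;  /* offset lookup elided */
    }

    /* driver asks for the next scatter element */
    static int next_sge(void *arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *c = arg;
        *address = c->sge[c->idx].base;
        *length  = c->sge[c->idx].len;
        c->idx++;
        return 0;
    }

    static void sge_read_done(void *arg, const struct spdk_nvme_cpl *cpl) { }

    static int read_scattered(struct spdk_nvme_ns *ns,
                              struct spdk_nvme_qpair *qpair,
                              struct sgl_ctx *c, uint32_t lba_count)
    {
        return spdk_nvme_ns_cmd_readv(ns, qpair, 0 /* lba */, lba_count,
                                      sge_read_done, c, 0 /* io_flags */,
                                      reset_sgl, next_sge);
    }

The nvme_e2edp run just above exercises the same entry points, passing protection-information flags (such as SPDK_NVME_IO_FLAGS_PRACT) through that io_flags argument.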
00:14:26.379 00:14:26.379 real 0m0.386s 00:14:26.379 user 0m0.133s 00:14:26.379 sys 0m0.201s 00:14:26.379 15:26:12 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.379 ************************************ 00:14:26.379 END TEST nvme_e2edp 00:14:26.379 ************************************ 00:14:26.379 15:26:12 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:14:26.379 15:26:12 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:14:26.379 15:26:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.379 15:26:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.379 15:26:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.379 ************************************ 00:14:26.379 START TEST nvme_reserve 00:14:26.379 ************************************ 00:14:26.379 15:26:12 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:14:26.946 ===================================================== 00:14:26.946 NVMe Controller at PCI bus 0, device 16, function 0 00:14:26.946 ===================================================== 00:14:26.946 Reservations: Not Supported 00:14:26.946 ===================================================== 00:14:26.946 NVMe Controller at PCI bus 0, device 17, function 0 00:14:26.946 ===================================================== 00:14:26.946 Reservations: Not Supported 00:14:26.946 ===================================================== 00:14:26.946 NVMe Controller at PCI bus 0, device 19, function 0 00:14:26.946 ===================================================== 00:14:26.946 Reservations: Not Supported 00:14:26.946 ===================================================== 00:14:26.946 NVMe Controller at PCI bus 0, device 18, function 0 00:14:26.946 ===================================================== 00:14:26.946 Reservations: Not Supported 00:14:26.946 Reservation test passed 00:14:26.946 ************************************ 00:14:26.946 END TEST nvme_reserve 00:14:26.946 ************************************ 00:14:26.946 00:14:26.946 real 0m0.361s 00:14:26.946 user 0m0.132s 00:14:26.946 sys 0m0.178s 00:14:26.946 15:26:12 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.946 15:26:12 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:14:26.946 15:26:12 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:14:26.946 15:26:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.946 15:26:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.946 15:26:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.946 ************************************ 00:14:26.946 START TEST nvme_err_injection 00:14:26.946 ************************************ 00:14:26.946 15:26:12 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:14:27.205 NVMe Error Injection test 00:14:27.205 Attached to 0000:00:10.0 00:14:27.205 Attached to 0000:00:11.0 00:14:27.205 Attached to 0000:00:13.0 00:14:27.205 Attached to 0000:00:12.0 00:14:27.205 0000:00:10.0: get features failed as expected 00:14:27.205 0000:00:11.0: get features failed as expected 00:14:27.205 0000:00:13.0: get features failed as expected 00:14:27.205 0000:00:12.0: get features failed as expected 00:14:27.205 
0000:00:13.0: get features successfully as expected 00:14:27.205 0000:00:12.0: get features successfully as expected 00:14:27.205 0000:00:10.0: get features successfully as expected 00:14:27.205 0000:00:11.0: get features successfully as expected 00:14:27.205 0000:00:10.0: read failed as expected 00:14:27.205 0000:00:11.0: read failed as expected 00:14:27.205 0000:00:13.0: read failed as expected 00:14:27.205 0000:00:12.0: read failed as expected 00:14:27.205 0000:00:10.0: read successfully as expected 00:14:27.205 0000:00:11.0: read successfully as expected 00:14:27.205 0000:00:13.0: read successfully as expected 00:14:27.205 0000:00:12.0: read successfully as expected 00:14:27.205 Cleaning up... 00:14:27.205 00:14:27.205 real 0m0.394s 00:14:27.205 user 0m0.155s 00:14:27.205 sys 0m0.193s 00:14:27.205 ************************************ 00:14:27.205 END TEST nvme_err_injection 00:14:27.205 ************************************ 00:14:27.205 15:26:13 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.205 15:26:13 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:14:27.541 15:26:13 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:14:27.541 15:26:13 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:14:27.541 15:26:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.541 15:26:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.541 ************************************ 00:14:27.541 START TEST nvme_overhead 00:14:27.541 ************************************ 00:14:27.541 15:26:13 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:14:28.920 Initializing NVMe Controllers 00:14:28.920 Attached to 0000:00:10.0 00:14:28.920 Attached to 0000:00:11.0 00:14:28.920 Attached to 0000:00:13.0 00:14:28.920 Attached to 0000:00:12.0 00:14:28.920 Initialization complete. Launching workers. 
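For the histograms that follow: overhead does not measure device latency. It times the driver's own submit and reap paths with the TSC, one sample per I/O, and buckets the samples. Roughly, as a sketch rather than the tool's exact loop (io_complete as in the earlier hello-world sketch):

    static void time_one_io(struct spdk_nvme_ns *ns,
                            struct spdk_nvme_qpair *qpair, void *buf)
    {
        uint64_t hz = spdk_get_ticks_hz();   /* TSC ticks per second */
        uint64_t t0, t1, submit_ns, complete_ns;
        int32_t rc;

        t0 = spdk_get_ticks();
        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1,
                              io_complete, NULL, 0);
        t1 = spdk_get_ticks();
        submit_ns = (t1 - t0) * 1000000000ULL / hz;    /* one submit sample */

        do {            /* time only the call that actually reaps the I/O */
            t0 = spdk_get_ticks();
            rc = spdk_nvme_qpair_process_completions(qpair, 0);
            t1 = spdk_get_ticks();
        } while (rc == 0);
        complete_ns = (t1 - t0) * 1000000000ULL / hz;  /* one complete sample */
        (void)submit_ns; (void)complete_ns;  /* the tool buckets these */
    }

Read the output below with that in mind: the summary's submit average of 15654.5 ns is the same ~15.7 us the Submit histogram's cumulative counts climb through, and the 148431.4 ns maximum matches the lone sample in the 148.236 - 149.211 us bucket.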
00:14:28.920 submit (in ns) avg, min, max = 15654.5, 12255.2, 148431.4 00:14:28.920 complete (in ns) avg, min, max = 10547.0, 8018.1, 49210.5 00:14:28.920 00:14:28.920 Submit histogram 00:14:28.920 ================ 00:14:28.920 Range in us Cumulative Count 00:14:28.920 12.251 - 12.312: 0.0230% ( 2) 00:14:28.920 12.312 - 12.373: 0.0805% ( 5) 00:14:28.920 12.373 - 12.434: 0.5062% ( 37) 00:14:28.920 12.434 - 12.495: 1.1849% ( 59) 00:14:28.920 12.495 - 12.556: 2.5883% ( 122) 00:14:28.920 12.556 - 12.617: 4.1528% ( 136) 00:14:28.920 12.617 - 12.678: 5.3146% ( 101) 00:14:28.920 12.678 - 12.739: 6.5455% ( 107) 00:14:28.920 12.739 - 12.800: 7.4658% ( 80) 00:14:28.920 12.800 - 12.861: 8.1905% ( 63) 00:14:28.920 12.861 - 12.922: 8.7887% ( 52) 00:14:28.920 12.922 - 12.983: 9.1798% ( 34) 00:14:28.920 12.983 - 13.044: 9.4329% ( 22) 00:14:28.920 13.044 - 13.105: 9.6745% ( 21) 00:14:28.920 13.105 - 13.166: 9.8240% ( 13) 00:14:28.920 13.166 - 13.227: 9.9620% ( 12) 00:14:28.920 13.227 - 13.288: 10.2841% ( 28) 00:14:28.920 13.288 - 13.349: 10.8938% ( 53) 00:14:28.920 13.349 - 13.410: 12.5043% ( 140) 00:14:28.920 13.410 - 13.470: 15.9324% ( 298) 00:14:28.920 13.470 - 13.531: 20.3267% ( 382) 00:14:28.920 13.531 - 13.592: 26.7111% ( 555) 00:14:28.920 13.592 - 13.653: 32.9345% ( 541) 00:14:28.920 13.653 - 13.714: 38.2377% ( 461) 00:14:28.920 13.714 - 13.775: 42.2064% ( 345) 00:14:28.920 13.775 - 13.836: 45.5654% ( 292) 00:14:28.920 13.836 - 13.897: 47.7626% ( 191) 00:14:28.920 13.897 - 13.958: 49.5801% ( 158) 00:14:28.920 13.958 - 14.019: 51.0181% ( 125) 00:14:28.920 14.019 - 14.080: 52.0994% ( 94) 00:14:28.920 14.080 - 14.141: 52.7781% ( 59) 00:14:28.920 14.141 - 14.202: 53.6524% ( 76) 00:14:28.920 14.202 - 14.263: 54.7222% ( 93) 00:14:28.920 14.263 - 14.324: 56.4247% ( 148) 00:14:28.920 14.324 - 14.385: 58.1847% ( 153) 00:14:28.920 14.385 - 14.446: 60.3359% ( 187) 00:14:28.920 14.446 - 14.507: 62.2340% ( 165) 00:14:28.920 14.507 - 14.568: 63.8790% ( 143) 00:14:28.920 14.568 - 14.629: 65.0178% ( 99) 00:14:28.920 14.629 - 14.690: 65.8921% ( 76) 00:14:28.920 14.690 - 14.750: 66.6858% ( 69) 00:14:28.920 14.750 - 14.811: 67.2265% ( 47) 00:14:28.920 14.811 - 14.872: 67.7672% ( 47) 00:14:28.920 14.872 - 14.933: 68.2158% ( 39) 00:14:28.920 14.933 - 14.994: 68.5149% ( 26) 00:14:28.920 14.994 - 15.055: 68.7680% ( 22) 00:14:28.920 15.055 - 15.116: 69.0441% ( 24) 00:14:28.920 15.116 - 15.177: 69.2971% ( 22) 00:14:28.920 15.177 - 15.238: 69.5272% ( 20) 00:14:28.920 15.238 - 15.299: 69.6537% ( 11) 00:14:28.920 15.299 - 15.360: 69.8953% ( 21) 00:14:28.920 15.360 - 15.421: 69.9873% ( 8) 00:14:28.920 15.421 - 15.482: 70.1484% ( 14) 00:14:28.920 15.482 - 15.543: 70.1944% ( 4) 00:14:28.920 15.543 - 15.604: 70.2749% ( 7) 00:14:28.920 15.604 - 15.726: 70.4130% ( 12) 00:14:28.920 15.726 - 15.848: 70.4935% ( 7) 00:14:28.920 15.848 - 15.970: 70.5395% ( 4) 00:14:28.920 15.970 - 16.091: 70.5625% ( 2) 00:14:28.920 16.091 - 16.213: 70.5855% ( 2) 00:14:28.920 16.213 - 16.335: 70.6200% ( 3) 00:14:28.920 16.335 - 16.457: 70.6315% ( 1) 00:14:28.920 16.457 - 16.579: 70.6430% ( 1) 00:14:28.920 16.579 - 16.701: 70.6545% ( 1) 00:14:28.920 16.823 - 16.945: 70.6661% ( 1) 00:14:28.920 16.945 - 17.067: 70.6776% ( 1) 00:14:28.920 17.554 - 17.676: 70.6891% ( 1) 00:14:28.920 17.798 - 17.920: 70.7006% ( 1) 00:14:28.920 17.920 - 18.042: 70.7121% ( 1) 00:14:28.920 18.042 - 18.164: 70.7581% ( 4) 00:14:28.920 18.164 - 18.286: 70.7811% ( 2) 00:14:28.920 18.286 - 18.408: 70.8731% ( 8) 00:14:28.920 18.408 - 18.530: 71.0227% ( 13) 00:14:28.920 18.530 - 18.651: 
71.2757% ( 22) 00:14:28.920 18.651 - 18.773: 72.4606% ( 103) 00:14:28.920 18.773 - 18.895: 77.3151% ( 422) 00:14:28.920 18.895 - 19.017: 83.5270% ( 540) 00:14:28.920 19.017 - 19.139: 87.4612% ( 342) 00:14:28.920 19.139 - 19.261: 90.2105% ( 239) 00:14:28.920 19.261 - 19.383: 91.6369% ( 124) 00:14:28.920 19.383 - 19.505: 92.8333% ( 104) 00:14:28.920 19.505 - 19.627: 93.5580% ( 63) 00:14:28.920 19.627 - 19.749: 94.1677% ( 53) 00:14:28.920 19.749 - 19.870: 94.5934% ( 37) 00:14:28.920 19.870 - 19.992: 95.0075% ( 36) 00:14:28.920 19.992 - 20.114: 95.2606% ( 22) 00:14:28.921 20.114 - 20.236: 95.5021% ( 21) 00:14:28.921 20.236 - 20.358: 95.6402% ( 12) 00:14:28.921 20.358 - 20.480: 95.8012% ( 14) 00:14:28.921 20.480 - 20.602: 95.9048% ( 9) 00:14:28.921 20.602 - 20.724: 96.0658% ( 14) 00:14:28.921 20.724 - 20.846: 96.1578% ( 8) 00:14:28.921 20.846 - 20.968: 96.1808% ( 2) 00:14:28.921 20.968 - 21.090: 96.2384% ( 5) 00:14:28.921 21.090 - 21.211: 96.3074% ( 6) 00:14:28.921 21.211 - 21.333: 96.3304% ( 2) 00:14:28.921 21.333 - 21.455: 96.3764% ( 4) 00:14:28.921 21.455 - 21.577: 96.4224% ( 4) 00:14:28.921 21.577 - 21.699: 96.4454% ( 2) 00:14:28.921 21.699 - 21.821: 96.5374% ( 8) 00:14:28.921 21.821 - 21.943: 96.6180% ( 7) 00:14:28.921 21.943 - 22.065: 96.6525% ( 3) 00:14:28.921 22.065 - 22.187: 96.6755% ( 2) 00:14:28.921 22.187 - 22.309: 96.7215% ( 4) 00:14:28.921 22.309 - 22.430: 96.7560% ( 3) 00:14:28.921 22.430 - 22.552: 96.8020% ( 4) 00:14:28.921 22.552 - 22.674: 96.8365% ( 3) 00:14:28.921 22.674 - 22.796: 96.8595% ( 2) 00:14:28.921 22.796 - 22.918: 96.8941% ( 3) 00:14:28.921 22.918 - 23.040: 96.9056% ( 1) 00:14:28.921 23.162 - 23.284: 96.9171% ( 1) 00:14:28.921 23.284 - 23.406: 96.9746% ( 5) 00:14:28.921 23.406 - 23.528: 97.0321% ( 5) 00:14:28.921 23.528 - 23.650: 97.0436% ( 1) 00:14:28.921 23.650 - 23.771: 97.0666% ( 2) 00:14:28.921 23.771 - 23.893: 97.0896% ( 2) 00:14:28.921 23.893 - 24.015: 97.1126% ( 2) 00:14:28.921 24.015 - 24.137: 97.1241% ( 1) 00:14:28.921 24.137 - 24.259: 97.1471% ( 2) 00:14:28.921 24.259 - 24.381: 97.1586% ( 1) 00:14:28.921 24.503 - 24.625: 97.1816% ( 2) 00:14:28.921 24.625 - 24.747: 97.2162% ( 3) 00:14:28.921 24.747 - 24.869: 97.2622% ( 4) 00:14:28.921 24.869 - 24.990: 97.3427% ( 7) 00:14:28.921 24.990 - 25.112: 97.4692% ( 11) 00:14:28.921 25.112 - 25.234: 97.7223% ( 22) 00:14:28.921 25.234 - 25.356: 97.9409% ( 19) 00:14:28.921 25.356 - 25.478: 98.0329% ( 8) 00:14:28.921 25.478 - 25.600: 98.2285% ( 17) 00:14:28.921 25.600 - 25.722: 98.3435% ( 10) 00:14:28.921 25.722 - 25.844: 98.4240% ( 7) 00:14:28.921 25.844 - 25.966: 98.5045% ( 7) 00:14:28.921 25.966 - 26.088: 98.5276% ( 2) 00:14:28.921 26.088 - 26.210: 98.5851% ( 5) 00:14:28.921 26.210 - 26.331: 98.6196% ( 3) 00:14:28.921 26.331 - 26.453: 98.6426% ( 2) 00:14:28.921 26.453 - 26.575: 98.6656% ( 2) 00:14:28.921 26.575 - 26.697: 98.6771% ( 1) 00:14:28.921 26.697 - 26.819: 98.7001% ( 2) 00:14:28.921 26.941 - 27.063: 98.7116% ( 1) 00:14:28.921 27.063 - 27.185: 98.7461% ( 3) 00:14:28.921 27.185 - 27.307: 98.7921% ( 4) 00:14:28.921 27.429 - 27.550: 98.8151% ( 2) 00:14:28.921 27.550 - 27.672: 98.8266% ( 1) 00:14:28.921 27.672 - 27.794: 98.8612% ( 3) 00:14:28.921 27.794 - 27.916: 98.9072% ( 4) 00:14:28.921 27.916 - 28.038: 98.9302% ( 2) 00:14:28.921 28.038 - 28.160: 98.9762% ( 4) 00:14:28.921 28.160 - 28.282: 99.0337% ( 5) 00:14:28.921 28.282 - 28.404: 99.0797% ( 4) 00:14:28.921 28.404 - 28.526: 99.1142% ( 3) 00:14:28.921 28.526 - 28.648: 99.1372% ( 2) 00:14:28.921 28.648 - 28.770: 99.1833% ( 4) 00:14:28.921 28.770 - 28.891: 
99.2063% ( 2) 00:14:28.921 29.013 - 29.135: 99.2523% ( 4) 00:14:28.921 29.135 - 29.257: 99.2868% ( 3) 00:14:28.921 29.501 - 29.623: 99.2983% ( 1) 00:14:28.921 29.623 - 29.745: 99.3558% ( 5) 00:14:28.921 29.745 - 29.867: 99.4133% ( 5) 00:14:28.921 29.867 - 29.989: 99.4478% ( 3) 00:14:28.921 29.989 - 30.110: 99.4593% ( 1) 00:14:28.921 30.110 - 30.232: 99.4938% ( 3) 00:14:28.921 30.232 - 30.354: 99.5284% ( 3) 00:14:28.921 30.354 - 30.476: 99.5859% ( 5) 00:14:28.921 30.476 - 30.598: 99.6204% ( 3) 00:14:28.921 30.598 - 30.720: 99.6434% ( 2) 00:14:28.921 30.720 - 30.842: 99.6549% ( 1) 00:14:28.921 30.842 - 30.964: 99.6664% ( 1) 00:14:28.921 30.964 - 31.086: 99.7009% ( 3) 00:14:28.921 31.939 - 32.183: 99.7239% ( 2) 00:14:28.921 32.183 - 32.427: 99.7354% ( 1) 00:14:28.921 32.670 - 32.914: 99.7469% ( 1) 00:14:28.921 32.914 - 33.158: 99.7584% ( 1) 00:14:28.921 33.646 - 33.890: 99.7699% ( 1) 00:14:28.921 35.352 - 35.596: 99.7814% ( 1) 00:14:28.921 35.596 - 35.840: 99.8044% ( 2) 00:14:28.921 36.084 - 36.328: 99.8159% ( 1) 00:14:28.921 37.059 - 37.303: 99.8274% ( 1) 00:14:28.921 37.303 - 37.547: 99.8390% ( 1) 00:14:28.921 37.547 - 37.790: 99.8505% ( 1) 00:14:28.921 38.034 - 38.278: 99.8735% ( 2) 00:14:28.921 38.278 - 38.522: 99.8850% ( 1) 00:14:28.921 38.766 - 39.010: 99.8965% ( 1) 00:14:28.921 42.423 - 42.667: 99.9080% ( 1) 00:14:28.921 46.324 - 46.568: 99.9195% ( 1) 00:14:28.921 56.808 - 57.051: 99.9310% ( 1) 00:14:28.921 59.977 - 60.221: 99.9425% ( 1) 00:14:28.921 70.705 - 71.192: 99.9540% ( 1) 00:14:28.921 76.556 - 77.044: 99.9655% ( 1) 00:14:28.921 97.036 - 97.524: 99.9770% ( 1) 00:14:28.921 113.128 - 113.615: 99.9885% ( 1) 00:14:28.921 148.236 - 149.211: 100.0000% ( 1) 00:14:28.921 00:14:28.921 Complete histogram 00:14:28.921 ================== 00:14:28.921 Range in us Cumulative Count 00:14:28.921 7.985 - 8.046: 0.0230% ( 2) 00:14:28.921 8.046 - 8.107: 0.1150% ( 8) 00:14:28.921 8.107 - 8.168: 0.2301% ( 10) 00:14:28.921 8.168 - 8.229: 0.2876% ( 5) 00:14:28.921 8.229 - 8.290: 0.4716% ( 16) 00:14:28.921 8.290 - 8.350: 0.6327% ( 14) 00:14:28.921 8.350 - 8.411: 0.9893% ( 31) 00:14:28.921 8.411 - 8.472: 1.5875% ( 52) 00:14:28.921 8.472 - 8.533: 2.1397% ( 48) 00:14:28.921 8.533 - 8.594: 2.9219% ( 68) 00:14:28.921 8.594 - 8.655: 3.9112% ( 86) 00:14:28.921 8.655 - 8.716: 5.4412% ( 133) 00:14:28.921 8.716 - 8.777: 8.5701% ( 272) 00:14:28.921 8.777 - 8.838: 14.5634% ( 521) 00:14:28.921 8.838 - 8.899: 20.9364% ( 554) 00:14:28.921 8.899 - 8.960: 25.6183% ( 407) 00:14:28.921 8.960 - 9.021: 28.7243% ( 270) 00:14:28.921 9.021 - 9.082: 31.4391% ( 236) 00:14:28.921 9.082 - 9.143: 33.8203% ( 207) 00:14:28.921 9.143 - 9.204: 36.1670% ( 204) 00:14:28.921 9.204 - 9.265: 38.3987% ( 194) 00:14:28.921 9.265 - 9.326: 41.9303% ( 307) 00:14:28.921 9.326 - 9.387: 46.2441% ( 375) 00:14:28.921 9.387 - 9.448: 50.2473% ( 348) 00:14:28.921 9.448 - 9.509: 53.5028% ( 283) 00:14:28.921 9.509 - 9.570: 56.1371% ( 229) 00:14:28.921 9.570 - 9.630: 58.2538% ( 184) 00:14:28.921 9.630 - 9.691: 60.2209% ( 171) 00:14:28.922 9.691 - 9.752: 62.0959% ( 163) 00:14:28.922 9.752 - 9.813: 63.7294% ( 142) 00:14:28.922 9.813 - 9.874: 65.1099% ( 120) 00:14:28.922 9.874 - 9.935: 66.1682% ( 92) 00:14:28.922 9.935 - 9.996: 67.1115% ( 82) 00:14:28.922 9.996 - 10.057: 67.8592% ( 65) 00:14:28.922 10.057 - 10.118: 68.5724% ( 62) 00:14:28.922 10.118 - 10.179: 69.0441% ( 41) 00:14:28.922 10.179 - 10.240: 69.4467% ( 35) 00:14:28.922 10.240 - 10.301: 69.8033% ( 31) 00:14:28.922 10.301 - 10.362: 70.1139% ( 27) 00:14:28.922 10.362 - 10.423: 70.4245% ( 27) 
00:14:28.922 10.423 - 10.484: 70.6085% ( 16) 00:14:28.922 10.484 - 10.545: 70.7811% ( 15) 00:14:28.922 10.545 - 10.606: 71.0112% ( 20) 00:14:28.922 10.606 - 10.667: 71.0802% ( 6) 00:14:28.922 10.667 - 10.728: 71.1837% ( 9) 00:14:28.922 10.728 - 10.789: 71.2642% ( 7) 00:14:28.922 10.789 - 10.850: 71.3908% ( 11) 00:14:28.922 10.850 - 10.910: 71.4943% ( 9) 00:14:28.922 10.910 - 10.971: 71.5403% ( 4) 00:14:28.922 10.971 - 11.032: 71.5518% ( 1) 00:14:28.922 11.032 - 11.093: 71.5748% ( 2) 00:14:28.922 11.093 - 11.154: 71.6093% ( 3) 00:14:28.922 11.154 - 11.215: 71.6554% ( 4) 00:14:28.922 11.276 - 11.337: 71.6784% ( 2) 00:14:28.922 11.337 - 11.398: 71.6899% ( 1) 00:14:28.922 11.581 - 11.642: 71.7014% ( 1) 00:14:28.922 11.825 - 11.886: 71.7129% ( 1) 00:14:28.922 12.556 - 12.617: 71.9429% ( 20) 00:14:28.922 12.617 - 12.678: 73.5304% ( 138) 00:14:28.922 12.678 - 12.739: 77.1540% ( 315) 00:14:28.922 12.739 - 12.800: 81.3643% ( 366) 00:14:28.922 12.800 - 12.861: 84.6773% ( 288) 00:14:28.922 12.861 - 12.922: 87.1621% ( 216) 00:14:28.922 12.922 - 12.983: 89.1752% ( 175) 00:14:28.922 12.983 - 13.044: 90.6592% ( 129) 00:14:28.922 13.044 - 13.105: 91.7635% ( 96) 00:14:28.922 13.105 - 13.166: 92.7183% ( 83) 00:14:28.922 13.166 - 13.227: 93.4660% ( 65) 00:14:28.922 13.227 - 13.288: 94.0757% ( 53) 00:14:28.922 13.288 - 13.349: 94.3978% ( 28) 00:14:28.922 13.349 - 13.410: 94.6739% ( 24) 00:14:28.922 13.410 - 13.470: 94.8004% ( 11) 00:14:28.922 13.470 - 13.531: 94.9385% ( 12) 00:14:28.922 13.531 - 13.592: 95.0535% ( 10) 00:14:28.922 13.592 - 13.653: 95.1455% ( 8) 00:14:28.922 13.653 - 13.714: 95.1800% ( 3) 00:14:28.922 13.714 - 13.775: 95.2491% ( 6) 00:14:28.922 13.775 - 13.836: 95.2836% ( 3) 00:14:28.922 13.836 - 13.897: 95.3641% ( 7) 00:14:28.922 13.897 - 13.958: 95.4331% ( 6) 00:14:28.922 13.958 - 14.019: 95.4676% ( 3) 00:14:28.922 14.019 - 14.080: 95.4906% ( 2) 00:14:28.922 14.080 - 14.141: 95.5366% ( 4) 00:14:28.922 14.141 - 14.202: 95.6172% ( 7) 00:14:28.922 14.202 - 14.263: 95.7207% ( 9) 00:14:28.922 14.263 - 14.324: 95.8472% ( 11) 00:14:28.922 14.324 - 14.385: 96.0198% ( 15) 00:14:28.922 14.385 - 14.446: 96.1808% ( 14) 00:14:28.922 14.446 - 14.507: 96.2384% ( 5) 00:14:28.922 14.507 - 14.568: 96.3649% ( 11) 00:14:28.922 14.568 - 14.629: 96.4339% ( 6) 00:14:28.922 14.629 - 14.690: 96.4914% ( 5) 00:14:28.922 14.690 - 14.750: 96.5374% ( 4) 00:14:28.922 14.750 - 14.811: 96.5835% ( 4) 00:14:28.922 14.811 - 14.872: 96.6295% ( 4) 00:14:28.922 14.872 - 14.933: 96.6525% ( 2) 00:14:28.922 14.933 - 14.994: 96.6755% ( 2) 00:14:28.922 15.055 - 15.116: 96.6985% ( 2) 00:14:28.922 15.116 - 15.177: 96.7445% ( 4) 00:14:28.922 15.177 - 15.238: 96.7675% ( 2) 00:14:28.922 15.238 - 15.299: 96.8020% ( 3) 00:14:28.922 15.360 - 15.421: 96.8135% ( 1) 00:14:28.922 15.421 - 15.482: 96.8250% ( 1) 00:14:28.922 15.482 - 15.543: 96.8595% ( 3) 00:14:28.922 15.543 - 15.604: 96.8941% ( 3) 00:14:28.922 15.604 - 15.726: 96.9631% ( 6) 00:14:28.922 15.726 - 15.848: 97.0436% ( 7) 00:14:28.922 15.848 - 15.970: 97.0666% ( 2) 00:14:28.922 15.970 - 16.091: 97.1586% ( 8) 00:14:28.922 16.091 - 16.213: 97.2277% ( 6) 00:14:28.922 16.213 - 16.335: 97.2852% ( 5) 00:14:28.922 16.335 - 16.457: 97.3312% ( 4) 00:14:28.922 16.457 - 16.579: 97.3887% ( 5) 00:14:28.922 16.579 - 16.701: 97.4232% ( 3) 00:14:28.922 16.701 - 16.823: 97.4577% ( 3) 00:14:28.922 16.823 - 16.945: 97.4922% ( 3) 00:14:28.922 16.945 - 17.067: 97.5152% ( 2) 00:14:28.922 17.067 - 17.189: 97.5498% ( 3) 00:14:28.922 17.189 - 17.310: 97.6073% ( 5) 00:14:28.922 17.310 - 17.432: 97.6418% ( 3) 
00:14:28.922 17.432 - 17.554: 97.6533% ( 1) 00:14:28.922 17.554 - 17.676: 97.6993% ( 4) 00:14:28.922 17.676 - 17.798: 97.7223% ( 2) 00:14:28.922 17.798 - 17.920: 97.7338% ( 1) 00:14:28.922 17.920 - 18.042: 97.7798% ( 4) 00:14:28.922 18.042 - 18.164: 97.8028% ( 2) 00:14:28.922 18.164 - 18.286: 97.8258% ( 2) 00:14:28.922 18.286 - 18.408: 97.8488% ( 2) 00:14:28.922 18.408 - 18.530: 97.8603% ( 1) 00:14:28.922 18.651 - 18.773: 97.8719% ( 1) 00:14:28.922 19.139 - 19.261: 97.8834% ( 1) 00:14:28.922 19.505 - 19.627: 97.9064% ( 2) 00:14:28.922 19.749 - 19.870: 97.9179% ( 1) 00:14:28.922 19.870 - 19.992: 97.9409% ( 2) 00:14:28.922 19.992 - 20.114: 98.0099% ( 6) 00:14:28.922 20.114 - 20.236: 98.0214% ( 1) 00:14:28.922 20.236 - 20.358: 98.0904% ( 6) 00:14:28.922 20.358 - 20.480: 98.1594% ( 6) 00:14:28.922 20.480 - 20.602: 98.2515% ( 8) 00:14:28.922 20.602 - 20.724: 98.3435% ( 8) 00:14:28.922 20.724 - 20.846: 98.4125% ( 6) 00:14:28.922 20.846 - 20.968: 98.5276% ( 10) 00:14:28.922 20.968 - 21.090: 98.6311% ( 9) 00:14:28.922 21.090 - 21.211: 98.7116% ( 7) 00:14:28.922 21.211 - 21.333: 98.8151% ( 9) 00:14:28.922 21.333 - 21.455: 98.8957% ( 7) 00:14:28.922 21.455 - 21.577: 98.9762% ( 7) 00:14:28.922 21.577 - 21.699: 99.0222% ( 4) 00:14:28.922 21.699 - 21.821: 99.0452% ( 2) 00:14:28.922 21.821 - 21.943: 99.1027% ( 5) 00:14:28.922 21.943 - 22.065: 99.1602% ( 5) 00:14:28.922 22.065 - 22.187: 99.2063% ( 4) 00:14:28.922 22.187 - 22.309: 99.2293% ( 2) 00:14:28.922 22.309 - 22.430: 99.3098% ( 7) 00:14:28.922 22.430 - 22.552: 99.3443% ( 3) 00:14:28.922 22.552 - 22.674: 99.3673% ( 2) 00:14:28.922 22.674 - 22.796: 99.4133% ( 4) 00:14:28.922 22.796 - 22.918: 99.4478% ( 3) 00:14:28.922 22.918 - 23.040: 99.5053% ( 5) 00:14:28.922 23.040 - 23.162: 99.5284% ( 2) 00:14:28.922 23.162 - 23.284: 99.5399% ( 1) 00:14:28.922 23.284 - 23.406: 99.5514% ( 1) 00:14:28.922 23.406 - 23.528: 99.5629% ( 1) 00:14:28.922 23.650 - 23.771: 99.5744% ( 1) 00:14:28.922 23.771 - 23.893: 99.5859% ( 1) 00:14:28.922 23.893 - 24.015: 99.5974% ( 1) 00:14:28.922 24.015 - 24.137: 99.6089% ( 1) 00:14:28.922 24.137 - 24.259: 99.6434% ( 3) 00:14:28.922 24.381 - 24.503: 99.6549% ( 1) 00:14:28.922 24.503 - 24.625: 99.6664% ( 1) 00:14:28.922 24.990 - 25.112: 99.6894% ( 2) 00:14:28.922 25.356 - 25.478: 99.7239% ( 3) 00:14:28.922 25.478 - 25.600: 99.7469% ( 2) 00:14:28.922 25.600 - 25.722: 99.7584% ( 1) 00:14:28.922 25.722 - 25.844: 99.7699% ( 1) 00:14:28.922 25.844 - 25.966: 99.7814% ( 1) 00:14:28.922 26.088 - 26.210: 99.8159% ( 3) 00:14:28.922 26.331 - 26.453: 99.8274% ( 1) 00:14:28.922 26.453 - 26.575: 99.8390% ( 1) 00:14:28.922 26.941 - 27.063: 99.8505% ( 1) 00:14:28.922 27.063 - 27.185: 99.8620% ( 1) 00:14:28.922 27.185 - 27.307: 99.8735% ( 1) 00:14:28.922 27.429 - 27.550: 99.8850% ( 1) 00:14:28.922 27.550 - 27.672: 99.8965% ( 1) 00:14:28.922 27.916 - 28.038: 99.9080% ( 1) 00:14:28.922 28.770 - 28.891: 99.9195% ( 1) 00:14:28.922 29.745 - 29.867: 99.9310% ( 1) 00:14:28.922 32.427 - 32.670: 99.9425% ( 1) 00:14:28.922 33.646 - 33.890: 99.9540% ( 1) 00:14:28.922 37.790 - 38.034: 99.9655% ( 1) 00:14:28.922 43.154 - 43.398: 99.9770% ( 1) 00:14:28.922 48.518 - 48.762: 99.9885% ( 1) 00:14:28.923 49.006 - 49.250: 100.0000% ( 1) 00:14:28.923 00:14:28.923 00:14:28.923 real 0m1.391s 00:14:28.923 user 0m1.142s 00:14:28.923 sys 0m0.184s 00:14:28.923 15:26:14 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.923 15:26:14 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:14:28.923 ************************************ 
00:14:28.923 END TEST nvme_overhead 00:14:28.923 ************************************ 00:14:28.923 15:26:14 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:14:28.923 15:26:14 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:14:28.923 15:26:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.923 15:26:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.923 ************************************ 00:14:28.923 START TEST nvme_arbitration 00:14:28.923 ************************************ 00:14:28.923 15:26:14 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:14:32.213 Initializing NVMe Controllers 00:14:32.213 Attached to 0000:00:10.0 00:14:32.213 Attached to 0000:00:11.0 00:14:32.213 Attached to 0000:00:13.0 00:14:32.213 Attached to 0000:00:12.0 00:14:32.213 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:14:32.213 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:14:32.213 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:14:32.213 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:14:32.213 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:14:32.213 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:14:32.213 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:14:32.213 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:14:32.213 Initialization complete. Launching workers. 00:14:32.213 Starting thread on core 1 with urgent priority queue 00:14:32.213 Starting thread on core 2 with urgent priority queue 00:14:32.213 Starting thread on core 3 with urgent priority queue 00:14:32.213 Starting thread on core 0 with urgent priority queue 00:14:32.213 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:14:32.213 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:14:32.213 QEMU NVMe Ctrl (12341 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:14:32.213 QEMU NVMe Ctrl (12342 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:14:32.213 QEMU NVMe Ctrl (12343 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:14:32.213 QEMU NVMe Ctrl (12342 ) core 3: 512.00 IO/s 195.31 secs/100000 ios 00:14:32.213 ======================================================== 00:14:32.213 00:14:32.213 ************************************ 00:14:32.213 END TEST nvme_arbitration 00:14:32.213 ************************************ 00:14:32.213 00:14:32.213 real 0m3.524s 00:14:32.213 user 0m9.441s 00:14:32.213 sys 0m0.226s 00:14:32.213 15:26:18 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.213 15:26:18 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:14:32.471 15:26:18 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:32.471 15:26:18 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:32.471 15:26:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.471 15:26:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.471 ************************************ 00:14:32.471 START TEST nvme_single_aen 00:14:32.471 ************************************ 00:14:32.471 15:26:18 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:32.730 Asynchronous Event Request test 
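Two notes here. The arbitration run above gets its per-core priority classes by creating qpairs with spdk_nvme_io_qpair_opts.qprio after weighted-round-robin arbitration is selected in the controller options; since every thread here reports "urgent priority queue", throughput comes out near-equal across cores. And the Asynchronous Event Request test announced just above works by registering a callback before provoking an event; a sketch of the registration half (callback body illustrative):

    #include "spdk/nvme.h"

    /* runs whenever the controller posts an asynchronous event completion */
    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* the pairs printed below (aen_event_type: 0x01,
         * aen_event_info: 0x01) decode, per the NVMe spec, as a
         * SMART/health event for temperature over threshold, with
         * SMART log page 02h to read for details */
    }

    static void aer_setup(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, ctrlr);
        /* events are only delivered while the admin queue is polled: */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }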
00:14:32.730 Attached to 0000:00:10.0 00:14:32.730 Attached to 0000:00:11.0 00:14:32.730 Attached to 0000:00:13.0 00:14:32.730 Attached to 0000:00:12.0 00:14:32.730 Reset controller to setup AER completions for this process 00:14:32.730 Registering asynchronous event callbacks... 00:14:32.730 Getting orig temperature thresholds of all controllers 00:14:32.730 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.730 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.730 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.730 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:32.730 Setting all controllers temperature threshold low to trigger AER 00:14:32.730 Waiting for all controllers temperature threshold to be set lower 00:14:32.730 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.730 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:32.730 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.730 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:32.730 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.730 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:32.730 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:32.730 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:32.730 Waiting for all controllers to trigger AER and reset threshold 00:14:32.730 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.730 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.730 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.730 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:32.730 Cleaning up... 
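"Setting all controllers temperature threshold low to trigger AER" above is a single Set Features admin command: feature 04h (Temperature Threshold), whose TMPTH field occupies CDW11 bits 15:0. Dropping it below the reported composite temperature of 323 K forces the controller to raise the event. A sketch (set_temp_done is an illustrative no-op):

    static void set_temp_done(void *arg, const struct spdk_nvme_cpl *cpl) { }

    /* lower the composite temperature threshold to 'kelvin' */
    static int set_temp_threshold(struct spdk_nvme_ctrlr *ctrlr, uint32_t kelvin)
    {
        return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                       SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                       kelvin & 0xFFFF /* cdw11: TMPTH */, 0 /* cdw12 */,
                       NULL, 0 /* no payload */, set_temp_done, NULL);
    }

    /* e.g. set_temp_threshold(ctrlr, 200), then poll
     * spdk_nvme_ctrlr_process_admin_completions() until aer_cb fires,
     * then restore the original 343 K threshold -- the sequence logged above */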
00:14:32.730 00:14:32.730 real 0m0.361s 00:14:32.730 user 0m0.124s 00:14:32.730 sys 0m0.190s 00:14:32.730 ************************************ 00:14:32.730 END TEST nvme_single_aen 00:14:32.730 ************************************ 00:14:32.730 15:26:18 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.730 15:26:18 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:14:32.730 15:26:18 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:14:32.730 15:26:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.730 15:26:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.730 15:26:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.730 ************************************ 00:14:32.730 START TEST nvme_doorbell_aers 00:14:32.730 ************************************ 00:14:32.730 15:26:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:14:32.730 15:26:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:14:32.730 15:26:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:14:32.730 15:26:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:14:32.730 15:26:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:14:32.730 15:26:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:32.730 15:26:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:14:32.730 15:26:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:32.730 15:26:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:32.730 15:26:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:32.989 15:26:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:32.989 15:26:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:32.989 15:26:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:32.989 15:26:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:33.247 [2024-11-20 15:26:19.077124] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:14:43.230 Executing: test_write_invalid_db 00:14:43.230 Waiting for AER completion... 00:14:43.230 Failure: test_write_invalid_db 00:14:43.230 00:14:43.230 Executing: test_invalid_db_write_overflow_sq 00:14:43.230 Waiting for AER completion... 00:14:43.230 Failure: test_invalid_db_write_overflow_sq 00:14:43.230 00:14:43.230 Executing: test_invalid_db_write_overflow_cq 00:14:43.230 Waiting for AER completion... 
00:14:43.230 Failure: test_invalid_db_write_overflow_cq 00:14:43.230 00:14:43.230 15:26:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:43.230 15:26:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:43.230 [2024-11-20 15:26:29.130598] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:14:53.217 Executing: test_write_invalid_db 00:14:53.217 Waiting for AER completion... 00:14:53.217 Failure: test_write_invalid_db 00:14:53.217 00:14:53.217 Executing: test_invalid_db_write_overflow_sq 00:14:53.217 Waiting for AER completion... 00:14:53.217 Failure: test_invalid_db_write_overflow_sq 00:14:53.217 00:14:53.217 Executing: test_invalid_db_write_overflow_cq 00:14:53.217 Waiting for AER completion... 00:14:53.217 Failure: test_invalid_db_write_overflow_cq 00:14:53.217 00:14:53.217 15:26:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:53.217 15:26:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:53.485 [2024-11-20 15:26:39.173459] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:03.574 Executing: test_write_invalid_db 00:15:03.574 Waiting for AER completion... 00:15:03.574 Failure: test_write_invalid_db 00:15:03.574 00:15:03.574 Executing: test_invalid_db_write_overflow_sq 00:15:03.574 Waiting for AER completion... 00:15:03.574 Failure: test_invalid_db_write_overflow_sq 00:15:03.574 00:15:03.574 Executing: test_invalid_db_write_overflow_cq 00:15:03.574 Waiting for AER completion... 00:15:03.574 Failure: test_invalid_db_write_overflow_cq 00:15:03.574 00:15:03.574 15:26:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:03.574 15:26:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:03.574 [2024-11-20 15:26:49.260025] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.640 Executing: test_write_invalid_db 00:15:13.640 Waiting for AER completion... 00:15:13.640 Failure: test_write_invalid_db 00:15:13.640 00:15:13.640 Executing: test_invalid_db_write_overflow_sq 00:15:13.640 Waiting for AER completion... 00:15:13.640 Failure: test_invalid_db_write_overflow_sq 00:15:13.640 00:15:13.640 Executing: test_invalid_db_write_overflow_cq 00:15:13.640 Waiting for AER completion... 
00:15:13.640 Failure: test_invalid_db_write_overflow_cq 00:15:13.640 00:15:13.640 ************************************ 00:15:13.640 END TEST nvme_doorbell_aers 00:15:13.640 ************************************ 00:15:13.640 00:15:13.640 real 0m40.309s 00:15:13.640 user 0m28.521s 00:15:13.640 sys 0m11.390s 00:15:13.640 15:26:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.640 15:26:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:15:13.640 15:26:59 nvme -- nvme/nvme.sh@97 -- # uname 00:15:13.640 15:26:59 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:15:13.640 15:26:59 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:15:13.640 15:26:59 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:15:13.640 15:26:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.640 15:26:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.640 ************************************ 00:15:13.640 START TEST nvme_multi_aen 00:15:13.640 ************************************ 00:15:13.640 15:26:59 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:15:13.640 [2024-11-20 15:26:59.265881] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.640 [2024-11-20 15:26:59.266237] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.640 [2024-11-20 15:26:59.266266] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.640 [2024-11-20 15:26:59.268269] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.640 [2024-11-20 15:26:59.268307] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.640 [2024-11-20 15:26:59.268323] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.640 [2024-11-20 15:26:59.269850] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.640 [2024-11-20 15:26:59.269895] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.640 [2024-11-20 15:26:59.269910] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.641 [2024-11-20 15:26:59.271524] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.641 [2024-11-20 15:26:59.271692] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 00:15:13.641 [2024-11-20 15:26:59.271713] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64758) is not found. Dropping the request. 
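A note on the repeated "The owning process (pid 64758) is not found. Dropping the request." errors above and in the doorbell runs: in SPDK's multi-process model each pending admin request is tagged with its owner's pid, and when a process attaching to the controller finds requests (typically leftover AERs) whose owner has since exited, it prunes them with exactly this message. Pid 64758 is an earlier test process from this same job, so these lines read as expected housekeeping here, not failures.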
00:15:13.641 Child process pid: 65279 00:15:13.933 [Child] Asynchronous Event Request test 00:15:13.933 [Child] Attached to 0000:00:10.0 00:15:13.933 [Child] Attached to 0000:00:11.0 00:15:13.933 [Child] Attached to 0000:00:13.0 00:15:13.933 [Child] Attached to 0000:00:12.0 00:15:13.933 [Child] Registering asynchronous event callbacks... 00:15:13.933 [Child] Getting orig temperature thresholds of all controllers 00:15:13.933 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:13.933 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:13.933 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:13.933 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:13.933 [Child] Waiting for all controllers to trigger AER and reset threshold 00:15:13.933 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:13.933 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:13.933 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:13.933 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:13.933 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:13.933 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:13.933 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:13.933 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:13.933 [Child] Cleaning up... 00:15:13.933 Asynchronous Event Request test 00:15:13.933 Attached to 0000:00:10.0 00:15:13.933 Attached to 0000:00:11.0 00:15:13.933 Attached to 0000:00:13.0 00:15:13.933 Attached to 0000:00:12.0 00:15:13.933 Reset controller to setup AER completions for this process 00:15:13.933 Registering asynchronous event callbacks... 
00:15:13.933 Getting orig temperature thresholds of all controllers 00:15:13.933 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:13.933 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:13.933 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:13.933 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:13.933 Setting all controllers temperature threshold low to trigger AER 00:15:13.933 Waiting for all controllers temperature threshold to be set lower 00:15:13.933 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:13.933 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:15:13.933 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:13.933 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:15:13.933 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:13.933 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:15:13.933 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:13.933 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:15:13.933 Waiting for all controllers to trigger AER and reset threshold 00:15:13.933 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:13.933 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:13.933 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:13.933 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:13.933 Cleaning up... 00:15:13.933 ************************************ 00:15:13.933 00:15:13.933 real 0m0.691s 00:15:13.933 user 0m0.253s 00:15:13.933 sys 0m0.325s 00:15:13.933 15:26:59 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.933 15:26:59 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:15:13.933 END TEST nvme_multi_aen 00:15:13.933 ************************************ 00:15:13.933 15:26:59 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:15:13.933 15:26:59 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:13.933 15:26:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.933 15:26:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.933 ************************************ 00:15:13.933 START TEST nvme_startup 00:15:13.933 ************************************ 00:15:13.933 15:26:59 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:15:14.510 Initializing NVMe Controllers 00:15:14.510 Attached to 0000:00:10.0 00:15:14.510 Attached to 0000:00:11.0 00:15:14.510 Attached to 0000:00:13.0 00:15:14.510 Attached to 0000:00:12.0 00:15:14.510 Initialization complete. 00:15:14.510 Time used:276906.281 (us). 
00:15:14.510 ************************************ 00:15:14.510 END TEST nvme_startup 00:15:14.510 ************************************ 00:15:14.510 00:15:14.510 real 0m0.399s 00:15:14.510 user 0m0.143s 00:15:14.510 sys 0m0.207s 00:15:14.510 15:27:00 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.510 15:27:00 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:15:14.510 15:27:00 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:15:14.510 15:27:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:14.510 15:27:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.510 15:27:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.510 ************************************ 00:15:14.510 START TEST nvme_multi_secondary 00:15:14.510 ************************************ 00:15:14.510 15:27:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:15:14.510 15:27:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65335 00:15:14.510 15:27:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:15:14.510 15:27:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65336 00:15:14.510 15:27:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:15:14.510 15:27:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:15:18.698 Initializing NVMe Controllers 00:15:18.698 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:18.698 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:18.698 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:18.698 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:18.698 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:15:18.698 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:15:18.698 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:15:18.698 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:15:18.698 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:15:18.698 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:15:18.698 Initialization complete. Launching workers. 
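Every spdk_nvme_perf instance above passes -i 0, i.e. the same shared-memory ID; that is what lets several processes attach to the same controllers concurrently instead of each resetting them. At the driver level this is just the env layer's shm_id; a sketch of the relevant initialization, with the other options left at their defaults:

    #include "spdk/env.h"

    static int init_shared_env(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "perf";
        opts.shm_id = 0;     /* matches -i 0: join the same shm group */
        if (spdk_env_init(&opts) < 0)
            return -1;
        /* spdk_nvme_probe() in a secondary process now attaches to the
         * controllers the primary already initialized in that group */
        return 0;
    }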
00:15:18.698 ======================================================== 00:15:18.698 Latency(us) 00:15:18.698 Device Information : IOPS MiB/s Average min max 00:15:18.698 PCIE (0000:00:10.0) NSID 1 from core 2: 2612.41 10.20 6121.07 1412.35 16352.80 00:15:18.698 PCIE (0000:00:11.0) NSID 1 from core 2: 2612.41 10.20 6115.80 1480.92 12976.09 00:15:18.698 PCIE (0000:00:13.0) NSID 1 from core 2: 2612.41 10.20 6115.91 1538.74 13105.67 00:15:18.698 PCIE (0000:00:12.0) NSID 1 from core 2: 2612.41 10.20 6115.78 1412.05 13088.06 00:15:18.698 PCIE (0000:00:12.0) NSID 2 from core 2: 2612.41 10.20 6115.77 1448.77 13114.62 00:15:18.698 PCIE (0000:00:12.0) NSID 3 from core 2: 2612.41 10.20 6115.57 1501.92 12911.71 00:15:18.698 ======================================================== 00:15:18.698 Total : 15674.45 61.23 6116.65 1412.05 16352.80 00:15:18.698 00:15:18.698 15:27:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65335 00:15:18.698 Initializing NVMe Controllers 00:15:18.698 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:18.698 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:18.698 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:18.698 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:18.698 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:15:18.698 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:15:18.698 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:15:18.698 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:15:18.698 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:15:18.698 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:15:18.698 Initialization complete. Launching workers. 00:15:18.699 ======================================================== 00:15:18.699 Latency(us) 00:15:18.699 Device Information : IOPS MiB/s Average min max 00:15:18.699 PCIE (0000:00:10.0) NSID 1 from core 1: 5270.90 20.59 3033.67 1445.47 5994.18 00:15:18.699 PCIE (0000:00:11.0) NSID 1 from core 1: 5270.90 20.59 3034.90 1544.00 6061.22 00:15:18.699 PCIE (0000:00:13.0) NSID 1 from core 1: 5270.90 20.59 3034.76 1566.72 6017.63 00:15:18.699 PCIE (0000:00:12.0) NSID 1 from core 1: 5270.90 20.59 3034.96 1508.57 5688.84 00:15:18.699 PCIE (0000:00:12.0) NSID 2 from core 1: 5270.90 20.59 3035.03 1522.82 5831.81 00:15:18.699 PCIE (0000:00:12.0) NSID 3 from core 1: 5270.90 20.59 3035.06 1503.96 5824.28 00:15:18.699 ======================================================== 00:15:18.699 Total : 31625.38 123.54 3034.73 1445.47 6061.22 00:15:18.699 00:15:19.637 Initializing NVMe Controllers 00:15:19.637 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:19.637 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:19.637 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:19.637 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:19.637 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:19.637 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:19.637 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:19.637 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:19.637 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:19.637 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:19.637 Initialization complete. Launching workers. 
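A quick consistency check on the two completed tables above, via Little's law (in-flight commands = IOPS x mean latency): the core 1 run gives 5270.90 IO/s x 3034.73 us = 5270.90 x 0.00303473 s ~ 16.0 per namespace, and the core 2 run gives 2612.41 x 0.00612107 ~ 16.0 as well, so each namespace held exactly the queue depth requested with -q 16 for the whole interval. The six-namespace totals land near 96 = 6 x 16 for the same reason.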
00:15:19.637 ======================================================== 00:15:19.637 Latency(us) 00:15:19.637 Device Information : IOPS MiB/s Average min max 00:15:19.637 PCIE (0000:00:10.0) NSID 1 from core 0: 8121.38 31.72 1968.63 959.04 6145.29 00:15:19.637 PCIE (0000:00:11.0) NSID 1 from core 0: 8121.38 31.72 1969.63 970.37 5872.28 00:15:19.637 PCIE (0000:00:13.0) NSID 1 from core 0: 8121.38 31.72 1969.57 941.28 5745.76 00:15:19.637 PCIE (0000:00:12.0) NSID 1 from core 0: 8121.38 31.72 1969.50 920.91 5869.05 00:15:19.637 PCIE (0000:00:12.0) NSID 2 from core 0: 8121.38 31.72 1969.45 864.16 6636.28 00:15:19.637 PCIE (0000:00:12.0) NSID 3 from core 0: 8121.38 31.72 1969.38 824.99 6607.87 00:15:19.637 ======================================================== 00:15:19.637 Total : 48728.28 190.34 1969.36 824.99 6636.28 00:15:19.637 00:15:19.896 15:27:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65336 00:15:19.896 15:27:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65401 00:15:19.896 15:27:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65402 00:15:19.896 15:27:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:15:19.896 15:27:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:15:19.896 15:27:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:15:23.182 Initializing NVMe Controllers 00:15:23.182 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:23.182 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:23.182 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:23.182 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:23.182 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:15:23.182 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:15:23.182 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:15:23.182 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:15:23.182 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:15:23.182 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:15:23.182 Initialization complete. Launching workers. 
00:15:23.183 ======================================================== 00:15:23.183 Latency(us) 00:15:23.183 Device Information : IOPS MiB/s Average min max 00:15:23.183 PCIE (0000:00:10.0) NSID 1 from core 1: 5061.81 19.77 3158.98 1040.20 7153.80 00:15:23.183 PCIE (0000:00:11.0) NSID 1 from core 1: 5061.81 19.77 3160.24 1076.88 6414.12 00:15:23.183 PCIE (0000:00:13.0) NSID 1 from core 1: 5061.81 19.77 3160.37 1068.78 6264.70 00:15:23.183 PCIE (0000:00:12.0) NSID 1 from core 1: 5061.81 19.77 3160.31 1066.57 6562.59 00:15:23.183 PCIE (0000:00:12.0) NSID 2 from core 1: 5061.81 19.77 3160.45 1070.83 6956.48 00:15:23.183 PCIE (0000:00:12.0) NSID 3 from core 1: 5061.81 19.77 3160.35 1068.56 6940.44 00:15:23.183 ======================================================== 00:15:23.183 Total : 30370.88 118.64 3160.12 1040.20 7153.80 00:15:23.183 00:15:23.440 Initializing NVMe Controllers 00:15:23.440 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:23.440 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:23.440 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:23.440 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:23.440 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:23.440 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:23.440 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:23.440 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:23.440 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:23.440 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:23.440 Initialization complete. Launching workers. 00:15:23.440 ======================================================== 00:15:23.440 Latency(us) 00:15:23.440 Device Information : IOPS MiB/s Average min max 00:15:23.440 PCIE (0000:00:10.0) NSID 1 from core 0: 5256.79 20.53 3041.81 1343.92 11845.72 00:15:23.440 PCIE (0000:00:11.0) NSID 1 from core 0: 5256.79 20.53 3043.11 1365.55 11618.04 00:15:23.440 PCIE (0000:00:13.0) NSID 1 from core 0: 5256.79 20.53 3043.04 1214.12 8530.98 00:15:23.440 PCIE (0000:00:12.0) NSID 1 from core 0: 5256.79 20.53 3042.97 1148.40 8690.42 00:15:23.440 PCIE (0000:00:12.0) NSID 2 from core 0: 5256.79 20.53 3042.89 1070.44 8636.35 00:15:23.440 PCIE (0000:00:12.0) NSID 3 from core 0: 5256.79 20.53 3042.79 1022.02 8424.28 00:15:23.440 ======================================================== 00:15:23.440 Total : 31540.73 123.21 3042.77 1022.02 11845.72 00:15:23.440 00:15:25.342 Initializing NVMe Controllers 00:15:25.342 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:25.342 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:25.342 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:25.342 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:25.342 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:15:25.342 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:15:25.342 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:15:25.342 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:15:25.342 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:15:25.342 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:15:25.342 Initialization complete. Launching workers. 
00:15:25.342 ======================================================== 00:15:25.342 Latency(us) 00:15:25.342 Device Information : IOPS MiB/s Average min max 00:15:25.342 PCIE (0000:00:10.0) NSID 1 from core 2: 3339.22 13.04 4789.35 1007.39 14357.67 00:15:25.342 PCIE (0000:00:11.0) NSID 1 from core 2: 3339.22 13.04 4790.48 1040.03 14705.93 00:15:25.342 PCIE (0000:00:13.0) NSID 1 from core 2: 3339.22 13.04 4790.39 1032.02 13487.19 00:15:25.342 PCIE (0000:00:12.0) NSID 1 from core 2: 3339.22 13.04 4790.79 1039.91 14584.32 00:15:25.342 PCIE (0000:00:12.0) NSID 2 from core 2: 3339.22 13.04 4790.30 1048.59 13978.00 00:15:25.342 PCIE (0000:00:12.0) NSID 3 from core 2: 3342.42 13.06 4786.30 1042.76 13983.46 00:15:25.342 ======================================================== 00:15:25.342 Total : 20038.53 78.28 4789.60 1007.39 14705.93 00:15:25.342 00:15:25.342 ************************************ 00:15:25.342 END TEST nvme_multi_secondary 00:15:25.342 ************************************ 00:15:25.342 15:27:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65401 00:15:25.342 15:27:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65402 00:15:25.342 00:15:25.342 real 0m10.844s 00:15:25.342 user 0m18.775s 00:15:25.342 sys 0m1.243s 00:15:25.342 15:27:11 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.342 15:27:11 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:15:25.342 15:27:11 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:15:25.342 15:27:11 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:15:25.342 15:27:11 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64325 ]] 00:15:25.342 15:27:11 nvme -- common/autotest_common.sh@1094 -- # kill 64325 00:15:25.342 15:27:11 nvme -- common/autotest_common.sh@1095 -- # wait 64325 00:15:25.342 [2024-11-20 15:27:11.116326] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.116399] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.116429] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.116449] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.118600] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.118649] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.118665] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.118683] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.120981] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 
00:15:25.342 [2024-11-20 15:27:11.121146] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.121168] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.121186] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.123475] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.123523] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.123540] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 [2024-11-20 15:27:11.123557] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65274) is not found. Dropping the request. 00:15:25.342 15:27:11 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:15:25.601 15:27:11 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:15:25.601 15:27:11 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:15:25.601 15:27:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:25.601 15:27:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.601 15:27:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.601 ************************************ 00:15:25.601 START TEST bdev_nvme_reset_stuck_adm_cmd 00:15:25.601 ************************************ 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:15:25.601 * Looking for test storage... 
00:15:25.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:25.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.601 --rc genhtml_branch_coverage=1 00:15:25.601 --rc genhtml_function_coverage=1 00:15:25.601 --rc genhtml_legend=1 00:15:25.601 --rc geninfo_all_blocks=1 00:15:25.601 --rc geninfo_unexecuted_blocks=1 00:15:25.601 00:15:25.601 ' 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:25.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.601 --rc genhtml_branch_coverage=1 00:15:25.601 --rc genhtml_function_coverage=1 00:15:25.601 --rc genhtml_legend=1 00:15:25.601 --rc geninfo_all_blocks=1 00:15:25.601 --rc geninfo_unexecuted_blocks=1 00:15:25.601 00:15:25.601 ' 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:25.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.601 --rc genhtml_branch_coverage=1 00:15:25.601 --rc genhtml_function_coverage=1 00:15:25.601 --rc genhtml_legend=1 00:15:25.601 --rc geninfo_all_blocks=1 00:15:25.601 --rc geninfo_unexecuted_blocks=1 00:15:25.601 00:15:25.601 ' 00:15:25.601 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:25.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.602 --rc genhtml_branch_coverage=1 00:15:25.602 --rc genhtml_function_coverage=1 00:15:25.602 --rc genhtml_legend=1 00:15:25.602 --rc geninfo_all_blocks=1 00:15:25.602 --rc geninfo_unexecuted_blocks=1 00:15:25.602 00:15:25.602 ' 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:15:25.602 
15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:25.602 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65569 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65569 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65569 ']' 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.861 15:27:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:25.861 [2024-11-20 15:27:11.753298] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:15:25.861 [2024-11-20 15:27:11.753477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65569 ] 00:15:26.118 [2024-11-20 15:27:11.981961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:26.377 [2024-11-20 15:27:12.169654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.377 [2024-11-20 15:27:12.169822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.377 [2024-11-20 15:27:12.169931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.377 [2024-11-20 15:27:12.170199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:27.313 nvme0n1 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_vTOBn.txt 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:27.313 true 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732116433 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65592 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:27.313 15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:15:27.313 
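[Editor's note] The xtrace above is the core of the stuck-admin-command test: a one-shot error injection is armed on admin opcode 10 (GET FEATURES) with --do_not_submit, so the next matching command is held by the driver rather than submitted to the device, and that command is then issued in the background over RPC (-r c2h is the controller-to-host data direction). The lines that follow complete the flow: sleep, controller reset, wait, detach. A minimal sketch reconstructed from this output — not the test script itself; rpc.py is scripts/rpc.py (invoked via rpc_cmd in the log) and the long base64 GET FEATURES payload shown above is abbreviated to a placeholder:

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <GET FEATURES payload above> &  # held by the injection; does not complete on its own
  sleep 2
  rpc.py bdev_nvme_reset_controller nvme0   # reset manually completes the held request with the injected status
  wait                                      # send_cmd returns; its .cpl field carries the 16-byte completion entry
  rpc.py bdev_nvme_detach_controller nvme0

The completion later extracted below with jq -r .cpl (AAAAAAAAAAAAAAAAAAACAA==) decodes to sixteen zero bytes except byte 14 = 0x02, so the little-endian status word at bytes 14-15 is 0x0002. Stripping the phase bit gives SC = (0x0002 >> 1) & 0xff = 0x1 and SCT = (0x0002 >> 9) & 0x3 = 0x0 — exactly the injected --sct 0 --sc 1, which is what the base64_decode_bits calls compute and the check at nvme_reset_stuck_adm_cmd.sh@75 asserts.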
15:27:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:29.315 [2024-11-20 15:27:15.195883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:15:29.315 [2024-11-20 15:27:15.196362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:29.315 [2024-11-20 15:27:15.196396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:29.315 [2024-11-20 15:27:15.196414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.315 [2024-11-20 15:27:15.198434] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:15:29.315 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65592 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65592 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65592 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:15:29.315 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_vTOBn.txt 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_vTOBn.txt 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65569 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65569 ']' 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65569 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65569 00:15:29.574 killing process with pid 65569 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65569' 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65569 00:15:29.574 15:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65569 00:15:32.109 15:27:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:15:32.109 15:27:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:15:32.109 ************************************ 00:15:32.109 END TEST bdev_nvme_reset_stuck_adm_cmd 00:15:32.109 ************************************ 00:15:32.109 00:15:32.109 real 0m6.582s 
00:15:32.109 user 0m22.723s 00:15:32.109 sys 0m0.836s 00:15:32.109 15:27:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.109 15:27:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:32.109 15:27:17 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:15:32.109 15:27:17 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:15:32.109 15:27:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:32.109 15:27:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.109 15:27:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.109 ************************************ 00:15:32.109 START TEST nvme_fio 00:15:32.109 ************************************ 00:15:32.109 15:27:17 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:15:32.109 15:27:17 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:32.109 15:27:17 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:15:32.109 15:27:17 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:15:32.109 15:27:17 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:32.109 15:27:17 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:15:32.109 15:27:17 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:32.109 15:27:17 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:32.109 15:27:17 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:32.109 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:32.109 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:32.109 15:27:18 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:15:32.109 15:27:18 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:15:32.109 15:27:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:32.109 15:27:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:32.109 15:27:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:32.368 15:27:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:32.368 15:27:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:32.935 15:27:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:32.935 15:27:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:32.935 15:27:18 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:32.935 15:27:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:32.935 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:32.935 fio-3.35 00:15:32.935 Starting 1 thread 00:15:37.123 00:15:37.123 test: (groupid=0, jobs=1): err= 0: pid=65754: Wed Nov 20 15:27:22 2024 00:15:37.123 read: IOPS=18.5k, BW=72.3MiB/s (75.8MB/s)(145MiB/2001msec) 00:15:37.123 slat (nsec): min=4425, max=87894, avg=5751.59, stdev=1641.81 00:15:37.123 clat (usec): min=309, max=9533, avg=3438.92, stdev=596.39 00:15:37.123 lat (usec): min=315, max=9587, avg=3444.67, stdev=597.25 00:15:37.123 clat percentiles (usec): 00:15:37.124 | 1.00th=[ 2245], 5.00th=[ 2900], 10.00th=[ 2999], 20.00th=[ 3064], 00:15:37.124 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3195], 60.00th=[ 3261], 00:15:37.124 | 70.00th=[ 3851], 80.00th=[ 3982], 90.00th=[ 4113], 95.00th=[ 4293], 00:15:37.124 | 99.00th=[ 5014], 99.50th=[ 6259], 99.90th=[ 8094], 99.95th=[ 8291], 00:15:37.124 | 99.99th=[ 9372] 00:15:37.124 bw ( KiB/s): min=73880, max=79400, per=100.00%, avg=75880.00, stdev=3057.84, samples=3 00:15:37.124 iops : min=18470, max=19850, avg=18970.00, stdev=764.46, samples=3 00:15:37.124 write: IOPS=18.5k, BW=72.3MiB/s (75.8MB/s)(145MiB/2001msec); 0 zone resets 00:15:37.124 slat (nsec): min=4594, max=81553, avg=5935.49, stdev=1589.09 00:15:37.124 clat (usec): min=283, max=9390, avg=3445.74, stdev=595.71 00:15:37.124 lat (usec): min=289, max=9412, avg=3451.67, stdev=596.52 00:15:37.124 clat percentiles (usec): 00:15:37.124 | 1.00th=[ 2278], 5.00th=[ 2900], 10.00th=[ 2999], 20.00th=[ 3064], 00:15:37.124 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3195], 60.00th=[ 3261], 00:15:37.124 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4113], 95.00th=[ 4293], 00:15:37.124 | 99.00th=[ 5014], 99.50th=[ 6259], 99.90th=[ 8225], 99.95th=[ 8291], 00:15:37.124 | 99.99th=[ 9241] 00:15:37.124 bw ( KiB/s): min=74072, max=79472, per=100.00%, avg=75978.67, stdev=3029.54, samples=3 00:15:37.124 iops : min=18518, max=19868, avg=18994.67, stdev=757.39, samples=3 00:15:37.124 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.05% 00:15:37.124 lat (msec) : 2=0.52%, 4=81.37%, 10=18.03% 00:15:37.124 cpu : usr=99.15%, sys=0.15%, ctx=3, majf=0, minf=607 
00:15:37.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:37.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:37.124 issued rwts: total=37017,37039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:37.124 00:15:37.124 Run status group 0 (all jobs): 00:15:37.124 READ: bw=72.3MiB/s (75.8MB/s), 72.3MiB/s-72.3MiB/s (75.8MB/s-75.8MB/s), io=145MiB (152MB), run=2001-2001msec 00:15:37.124 WRITE: bw=72.3MiB/s (75.8MB/s), 72.3MiB/s-72.3MiB/s (75.8MB/s-75.8MB/s), io=145MiB (152MB), run=2001-2001msec 00:15:37.124 ----------------------------------------------------- 00:15:37.124 Suppressions used: 00:15:37.124 count bytes template 00:15:37.124 1 32 /usr/src/fio/parse.c 00:15:37.124 1 8 libtcmalloc_minimal.so 00:15:37.124 ----------------------------------------------------- 00:15:37.124 00:15:37.124 15:27:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:37.124 15:27:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:37.124 15:27:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:37.124 15:27:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:37.124 15:27:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:37.124 15:27:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:37.383 15:27:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:37.383 15:27:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:37.383 15:27:23 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:37.383 15:27:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:37.642 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:37.642 fio-3.35 00:15:37.642 Starting 1 thread 00:15:40.927 00:15:40.927 test: (groupid=0, jobs=1): err= 0: pid=65815: Wed Nov 20 15:27:26 2024 00:15:40.927 read: IOPS=18.9k, BW=73.6MiB/s (77.2MB/s)(147MiB/2001msec) 00:15:40.927 slat (usec): min=4, max=449, avg= 5.59, stdev= 3.30 00:15:40.927 clat (usec): min=239, max=9484, avg=3375.08, stdev=483.97 00:15:40.927 lat (usec): min=244, max=9539, avg=3380.67, stdev=484.52 00:15:40.927 clat percentiles (usec): 00:15:40.927 | 1.00th=[ 2933], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3097], 00:15:40.927 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3228], 00:15:40.927 | 70.00th=[ 3326], 80.00th=[ 3621], 90.00th=[ 4113], 95.00th=[ 4228], 00:15:40.927 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 8455], 99.95th=[ 8979], 00:15:40.927 | 99.99th=[ 9372] 00:15:40.927 bw ( KiB/s): min=70320, max=80024, per=97.81%, avg=73762.67, stdev=5431.44, samples=3 00:15:40.927 iops : min=17580, max=20006, avg=18440.67, stdev=1357.86, samples=3 00:15:40.927 write: IOPS=18.9k, BW=73.7MiB/s (77.3MB/s)(147MiB/2001msec); 0 zone resets 00:15:40.927 slat (usec): min=4, max=413, avg= 5.78, stdev= 3.60 00:15:40.927 clat (usec): min=286, max=9400, avg=3378.92, stdev=482.52 00:15:40.927 lat (usec): min=291, max=9420, avg=3384.71, stdev=483.14 00:15:40.927 clat percentiles (usec): 00:15:40.927 | 1.00th=[ 2933], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3097], 00:15:40.927 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3228], 00:15:40.927 | 70.00th=[ 3326], 80.00th=[ 3654], 90.00th=[ 4113], 95.00th=[ 4228], 00:15:40.927 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 8586], 99.95th=[ 8979], 00:15:40.927 | 99.99th=[ 9241] 00:15:40.927 bw ( KiB/s): min=70392, max=79608, per=97.64%, avg=73682.67, stdev=5141.96, samples=3 00:15:40.927 iops : min=17598, max=19902, avg=18420.67, stdev=1285.49, samples=3 00:15:40.927 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:15:40.927 lat (msec) : 2=0.06%, 4=84.60%, 10=15.30% 00:15:40.927 cpu : usr=98.25%, sys=0.45%, ctx=14, majf=0, minf=607 00:15:40.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:40.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:40.927 issued rwts: total=37726,37752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:40.927 00:15:40.927 Run status group 0 (all jobs): 00:15:40.927 READ: bw=73.6MiB/s (77.2MB/s), 73.6MiB/s-73.6MiB/s (77.2MB/s-77.2MB/s), io=147MiB (155MB), run=2001-2001msec 00:15:40.927 WRITE: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=147MiB (155MB), run=2001-2001msec 00:15:41.494 ----------------------------------------------------- 00:15:41.494 Suppressions used: 00:15:41.494 count bytes template 00:15:41.494 1 32 /usr/src/fio/parse.c 00:15:41.494 1 8 libtcmalloc_minimal.so 00:15:41.494 ----------------------------------------------------- 00:15:41.494 00:15:41.494 15:27:27 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:41.494 15:27:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:41.494 15:27:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:41.494 15:27:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:41.753 15:27:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:41.753 15:27:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:42.012 15:27:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:42.012 15:27:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:42.012 15:27:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:42.272 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:42.272 fio-3.35 00:15:42.272 Starting 1 thread 00:15:46.492 00:15:46.492 test: (groupid=0, jobs=1): err= 0: pid=65881: Wed Nov 20 15:27:31 2024 00:15:46.492 read: IOPS=20.1k, BW=78.5MiB/s (82.4MB/s)(157MiB/2001msec) 00:15:46.492 slat (nsec): min=4145, max=53513, avg=5191.64, stdev=1329.18 00:15:46.492 clat (usec): min=298, max=9325, avg=3169.82, stdev=321.78 00:15:46.492 lat (usec): min=303, max=9378, avg=3175.01, stdev=322.29 00:15:46.492 clat percentiles (usec): 00:15:46.492 | 1.00th=[ 2802], 5.00th=[ 2933], 10.00th=[ 2966], 20.00th=[ 3032], 00:15:46.492 | 30.00th=[ 3064], 40.00th=[ 
3097], 50.00th=[ 3097], 60.00th=[ 3130], 00:15:46.492 | 70.00th=[ 3163], 80.00th=[ 3228], 90.00th=[ 3326], 95.00th=[ 4047], 00:15:46.492 | 99.00th=[ 4293], 99.50th=[ 4293], 99.90th=[ 5145], 99.95th=[ 7439], 00:15:46.492 | 99.99th=[ 9110] 00:15:46.492 bw ( KiB/s): min=74336, max=82648, per=99.31%, avg=79874.67, stdev=4796.63, samples=3 00:15:46.492 iops : min=18584, max=20662, avg=19968.67, stdev=1199.16, samples=3 00:15:46.492 write: IOPS=20.1k, BW=78.3MiB/s (82.1MB/s)(157MiB/2001msec); 0 zone resets 00:15:46.492 slat (nsec): min=4323, max=27249, avg=5340.96, stdev=1248.30 00:15:46.492 clat (usec): min=222, max=9209, avg=3176.13, stdev=328.14 00:15:46.492 lat (usec): min=227, max=9231, avg=3181.47, stdev=328.61 00:15:46.492 clat percentiles (usec): 00:15:46.492 | 1.00th=[ 2802], 5.00th=[ 2933], 10.00th=[ 2966], 20.00th=[ 3032], 00:15:46.492 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3130], 00:15:46.492 | 70.00th=[ 3163], 80.00th=[ 3228], 90.00th=[ 3326], 95.00th=[ 4047], 00:15:46.492 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 5800], 99.95th=[ 7701], 00:15:46.492 | 99.99th=[ 8979] 00:15:46.492 bw ( KiB/s): min=74376, max=82816, per=99.62%, avg=79901.33, stdev=4787.49, samples=3 00:15:46.492 iops : min=18594, max=20704, avg=19975.33, stdev=1196.87, samples=3 00:15:46.492 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:15:46.492 lat (msec) : 2=0.12%, 4=94.14%, 10=5.70% 00:15:46.492 cpu : usr=99.20%, sys=0.20%, ctx=4, majf=0, minf=607 00:15:46.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:46.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.492 issued rwts: total=40235,40121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.492 00:15:46.492 Run status group 0 (all jobs): 00:15:46.492 READ: bw=78.5MiB/s (82.4MB/s), 78.5MiB/s-78.5MiB/s (82.4MB/s-82.4MB/s), io=157MiB (165MB), run=2001-2001msec 00:15:46.492 WRITE: bw=78.3MiB/s (82.1MB/s), 78.3MiB/s-78.3MiB/s (82.1MB/s-82.1MB/s), io=157MiB (164MB), run=2001-2001msec 00:15:46.492 ----------------------------------------------------- 00:15:46.492 Suppressions used: 00:15:46.492 count bytes template 00:15:46.492 1 32 /usr/src/fio/parse.c 00:15:46.492 1 8 libtcmalloc_minimal.so 00:15:46.492 ----------------------------------------------------- 00:15:46.492 00:15:46.492 15:27:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:46.492 15:27:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:46.492 15:27:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:46.492 15:27:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:46.751 15:27:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:46.751 15:27:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:47.010 15:27:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:47.010 15:27:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:47.010 15:27:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:47.269 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:47.269 fio-3.35 00:15:47.269 Starting 1 thread 00:15:51.462 00:15:51.462 test: (groupid=0, jobs=1): err= 0: pid=65947: Wed Nov 20 15:27:37 2024 00:15:51.462 read: IOPS=20.1k, BW=78.6MiB/s (82.5MB/s)(157MiB/2001msec) 00:15:51.462 slat (nsec): min=4344, max=66996, avg=5236.16, stdev=1300.14 00:15:51.462 clat (usec): min=217, max=8949, avg=3165.70, stdev=324.80 00:15:51.462 lat (usec): min=223, max=9016, avg=3170.94, stdev=325.26 00:15:51.462 clat percentiles (usec): 00:15:51.462 | 1.00th=[ 2802], 5.00th=[ 2933], 10.00th=[ 2966], 20.00th=[ 3032], 00:15:51.462 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3130], 00:15:51.462 | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3294], 95.00th=[ 3916], 00:15:51.462 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 5014], 99.95th=[ 6980], 00:15:51.462 | 99.99th=[ 8717] 00:15:51.462 bw ( KiB/s): min=74200, max=82584, per=99.01%, avg=79720.00, stdev=4781.59, samples=3 00:15:51.462 iops : min=18550, max=20646, avg=19930.00, stdev=1195.40, samples=3 00:15:51.462 write: IOPS=20.1k, BW=78.4MiB/s (82.2MB/s)(157MiB/2001msec); 0 zone resets 00:15:51.462 slat (usec): min=4, max=107, avg= 5.42, stdev= 1.41 00:15:51.462 clat (usec): min=248, max=8781, avg=3171.66, stdev=334.70 00:15:51.462 lat (usec): min=253, max=8803, avg=3177.09, stdev=335.17 00:15:51.462 clat percentiles (usec): 00:15:51.462 | 1.00th=[ 2835], 5.00th=[ 2933], 10.00th=[ 2999], 20.00th=[ 3032], 00:15:51.462 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3130], 00:15:51.462 | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3294], 95.00th=[ 4113], 00:15:51.462 | 
99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 5800], 99.95th=[ 7308], 00:15:51.462 | 99.99th=[ 8356] 00:15:51.462 bw ( KiB/s): min=74264, max=82472, per=99.24%, avg=79709.33, stdev=4715.97, samples=3 00:15:51.462 iops : min=18566, max=20618, avg=19927.33, stdev=1178.99, samples=3 00:15:51.462 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:15:51.462 lat (msec) : 2=0.06%, 4=94.81%, 10=5.09% 00:15:51.462 cpu : usr=99.25%, sys=0.10%, ctx=3, majf=0, minf=606 00:15:51.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:51.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.462 issued rwts: total=40279,40179,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.462 00:15:51.462 Run status group 0 (all jobs): 00:15:51.462 READ: bw=78.6MiB/s (82.5MB/s), 78.6MiB/s-78.6MiB/s (82.5MB/s-82.5MB/s), io=157MiB (165MB), run=2001-2001msec 00:15:51.462 WRITE: bw=78.4MiB/s (82.2MB/s), 78.4MiB/s-78.4MiB/s (82.2MB/s-82.2MB/s), io=157MiB (165MB), run=2001-2001msec 00:15:51.722 ----------------------------------------------------- 00:15:51.722 Suppressions used: 00:15:51.722 count bytes template 00:15:51.722 1 32 /usr/src/fio/parse.c 00:15:51.722 1 8 libtcmalloc_minimal.so 00:15:51.722 ----------------------------------------------------- 00:15:51.722 00:15:51.722 15:27:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:51.722 15:27:37 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:15:51.722 00:15:51.722 real 0m19.672s 00:15:51.722 user 0m15.021s 00:15:51.722 sys 0m4.942s 00:15:51.722 15:27:37 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.722 15:27:37 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:15:51.722 ************************************ 00:15:51.722 END TEST nvme_fio 00:15:51.722 ************************************ 00:15:51.722 ************************************ 00:15:51.722 END TEST nvme 00:15:51.722 ************************************ 00:15:51.722 00:15:51.722 real 1m36.033s 00:15:51.722 user 3m46.742s 00:15:51.722 sys 0m24.873s 00:15:51.722 15:27:37 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.722 15:27:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.989 15:27:37 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:15:51.989 15:27:37 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:51.989 15:27:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:51.989 15:27:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.989 15:27:37 -- common/autotest_common.sh@10 -- # set +x 00:15:51.989 ************************************ 00:15:51.989 START TEST nvme_scc 00:15:51.989 ************************************ 00:15:51.989 15:27:37 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:51.989 * Looking for test storage... 
00:15:51.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:51.990 15:27:37 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:51.990 15:27:37 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:51.990 15:27:37 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:51.990 15:27:37 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@345 -- # : 1 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.990 15:27:37 nvme_scc -- scripts/common.sh@368 -- # return 0 00:15:51.990 15:27:37 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.990 15:27:37 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:51.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.990 --rc genhtml_branch_coverage=1 00:15:51.990 --rc genhtml_function_coverage=1 00:15:51.990 --rc genhtml_legend=1 00:15:51.990 --rc geninfo_all_blocks=1 00:15:51.990 --rc geninfo_unexecuted_blocks=1 00:15:51.990 00:15:51.990 ' 00:15:51.990 15:27:37 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:51.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.990 --rc genhtml_branch_coverage=1 00:15:51.990 --rc genhtml_function_coverage=1 00:15:51.990 --rc genhtml_legend=1 00:15:51.990 --rc geninfo_all_blocks=1 00:15:51.990 --rc geninfo_unexecuted_blocks=1 00:15:51.990 00:15:51.990 ' 00:15:51.990 15:27:37 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:51.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.990 --rc genhtml_branch_coverage=1 00:15:51.990 --rc genhtml_function_coverage=1 00:15:51.990 --rc genhtml_legend=1 00:15:51.990 --rc geninfo_all_blocks=1 00:15:51.990 --rc geninfo_unexecuted_blocks=1 00:15:51.990 00:15:51.990 ' 00:15:51.990 15:27:37 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:51.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.990 --rc genhtml_branch_coverage=1 00:15:51.990 --rc genhtml_function_coverage=1 00:15:51.990 --rc genhtml_legend=1 00:15:51.990 --rc geninfo_all_blocks=1 00:15:51.990 --rc geninfo_unexecuted_blocks=1 00:15:51.990 00:15:51.990 ' 00:15:51.990 15:27:37 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.263 15:27:37 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.263 15:27:37 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.263 15:27:37 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.263 15:27:37 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.263 15:27:37 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.263 15:27:37 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.263 15:27:37 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.263 15:27:37 nvme_scc -- paths/export.sh@5 -- # export PATH 00:15:52.263 15:27:37 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
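The lcov gate traced above is a plain field-wise version compare: both version strings are split on ".", "-", or ":" and the fields are walked left to right until one differs. A reduced sketch of the cmp_versions logic from scripts/common.sh as exercised here (the real helper additionally validates each field as a decimal before comparing, which this sketch omits):

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # Walk the longer of the two field lists; missing fields count as 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]    # all fields equal: only <=, >=, == succeed
    }
    cmp_versions "$(lcov --version | awk '{print $NF}')" '<' 2 && echo "lcov older than 2"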
00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:52.263 15:27:37 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:15:52.263 15:27:37 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.263 15:27:37 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:15:52.263 15:27:37 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:15:52.263 15:27:37 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:15:52.263 15:27:37 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:52.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:52.780 Waiting for block devices as requested 00:15:52.780 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:53.039 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:53.039 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:53.039 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:58.313 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:58.313 15:27:44 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:58.313 15:27:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:58.313 15:27:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:58.313 15:27:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:58.313 15:27:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
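Everything from here to the end of the controller scan is one loop repeating the cycle just shown for each register: nvme id-ctrl (and, per namespace, id-ns) prints "name : value" lines, read splits each line on ":" with IFS, and eval stores the pair into a global associative array (nvme0, then ng0n1, nvme0n1, and so on). A trimmed sketch of that parse, assuming the same nvme-cli binary used above; the real nvme_get in nvme/functions.sh also preserves padded values such as 'sn' and handles the shift/quoting seen in the trace:

    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # register names are space-padded in the output
        [[ -n $reg && -n $val ]] && nvme0[$reg]=${val# }
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${nvme0[vid]} subnqn=${nvme0[subnqn]}"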
00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.313 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.314 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:58.315 15:27:44 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.315 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:58.316 15:27:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:15:58.316 
15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:15:58.316 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:15:58.317 15:27:44 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:15:58.317 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:15:58.318 15:27:44 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 
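The flbas value just captured for nvme0n1 (0x4) selects which of the eight lbaf entries the namespace actually uses: bits 3:0 index the format table, so format 4, recorded as "ms:0 lbads:12 rp:0 (in use)" in the lbaf tables of this trace, means 4096-byte blocks with no per-block metadata. A minimal sketch of decoding the block size from the arrays this trace builds (lba_size is an illustrative name, not a functions.sh helper):

  lba_size() {
      local -n _ns=$1                       # e.g. nvme0n1, as populated above
      local fmt=$(( ${_ns[flbas]} & 0xf ))  # FLBAS bits 3:0 pick the lbaf entry
      [[ ${_ns[lbaf$fmt]} =~ lbads:([0-9]+) ]] || return 1
      echo $(( 1 << BASH_REMATCH[1] ))      # lbads is a power-of-two exponent
  }

  lba_size nvme0n1   # -> 4096 given the values traced here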
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:58.318 15:27:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.318 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.319 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:58.588 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:58.589 15:27:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:58.589 15:27:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:58.589 15:27:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:58.589 15:27:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:58.589 15:27:44 
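The assignments at functions.sh@60-63 above register the fully parsed nvme0 in the ctrls, nvmes, bdfs and ordered_ctrls maps, and the loop has just selected nvme1 (PCI 0000:00:10.0) for the same treatment. The id-ctrl parse that follows is driven by nvme_get, whose body can be read off the traced line numbers @17-@23: declare a global associative array named after the device, then split each line of nvme-cli output on ':' into a register/value pair and eval it into the array. A sketch reconstructed from the trace, not the verbatim SPDK source:

  nvme_get() {
      local ref=$1 reg val
      shift                          # "$@" is now e.g.: id-ctrl /dev/nvme1
      local -gA "$ref=()"            # traced at functions.sh@20

      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue  # header lines carry no value, cf. [[ -n '' ]] above
          # squeeze the key's padding ("ps    0" -> ps0) and drop the one
          # space nvme-cli prints after the colon; trailing pad survives,
          # which is why sn/mn/fr keep their trailing blanks in the trace
          eval "${ref}[${reg// /}]=\"${val# }\""
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # binary path as traced at @16
  }

After nvme_get nvme1 id-ctrl /dev/nvme1 returns, fields such as ${nvme1[mdts]} and ${nvme1[oncs]} are available to the rest of the test.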
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.589 
15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:58.589 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:58.590 
15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:58.590 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.591 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.592 15:27:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:58.592 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:58.593 15:27:44 
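With nvme1's controller map filled in, the trace moves on to its namespaces: the nameref at functions.sh@53 points _ctrl_ns at nvme1_ns, and the extglob pattern at @54 visits both the generic character node (ng1n1) and the block node (nvme1n1). The surrounding scan can be sketched from the traced line numbers; the BDF lookup through the sysfs address attribute is an assumption, since @49 only shows the resulting pci value, and pci_can_use is the scripts/common.sh helper seen in the trace:

  shopt -s extglob nullglob
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls

  for ctrl in /sys/class/nvme/nvme+([0-9]); do
      ctrl_dev=${ctrl##*/}                      # nvme0, nvme1, ...
      pci=$(<"$ctrl/address")                   # assumed source of e.g. 0000:00:10.0
      pci_can_use "$pci" || continue            # functions.sh@50
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"

      declare -gA "${ctrl_dev}_ns=()"
      unset -n _ctrl_ns
      declare -n _ctrl_ns=${ctrl_dev}_ns        # functions.sh@53
      for ns in "$ctrl/"@("ng${ctrl_dev#nvme}"|"${ctrl_dev}n")*; do   # @54
          [[ -e $ns ]] || continue              # @55
          ns_dev=${ns##*/}
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
          _ctrl_ns[${ns_dev##*n}]=$ns_dev       # keyed by namespace index, @58
      done

      ctrls[$ctrl_dev]=$ctrl_dev                # @60-63, as traced for nvme0
      nvmes[$ctrl_dev]=${ctrl_dev}_ns
      bdfs[$ctrl_dev]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
  done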
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:15:58.593 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:15:58.594 15:27:44 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.594 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.595 
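The run of trace above is nvme_get populating a global bash associative array (ng1n1) from nvme-cli output: functions.sh@16 launches the identify command, @21 splits each output line on ':' with IFS and read, @22 skips entries without a value, and @23 evals the assignment. A minimal sketch of that loop, reconstructed from those trace markers (the whitespace trimming and the NVME path variable here are assumptions, not the verbatim upstream helper):

  # Reconstruction of the parser the trace shows; names follow the trace.
  NVME=/usr/local/src/nvme-cli/nvme       # binary invoked at functions.sh@16

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                 # declare the target array globally
      while IFS=: read -r reg val; do     # split "nsze : 0x17a17a" on ':'
          [[ -n $val ]] || continue       # drop lines with no value field
          reg=${reg//[[:space:]]/}        # assumed trim: 'lbaf  0 ' -> 'lbaf0'
          eval "${ref}[$reg]=\"${val# }\""  # e.g. ng1n1[nsze]="0x17a17a"
      done < <("$NVME" "$@")
  }

  nvme_get ng1n1 id-ns /dev/ng1n1         # afterwards: ${ng1n1[nsze]} == 0x17a17a

Note how this explains the lbaf0..lbaf7 keys: read splits only on the first ':', so the remainder of a "lbaf  0 : ms:0 lbads:9 rp:0" line, colons included, lands in val.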
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:15:58.595 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:15:58.596 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
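Each namespace of nvme1 is parsed twice above, once through its generic character node (ng1n1) and once through its block node (nvme1n1), and both passes land in the same slot of the controller's namespace map through a bash nameref. A small standalone sketch of that mechanism, using the variable names from the trace (hypothetical snippet, not the upstream helper):

  # _ctrl_ns is a nameref, so assignments through it land in nvme1_ns.
  declare -A nvme1_ns=()
  declare -n _ctrl_ns=nvme1_ns
  ns=/sys/class/nvme/nvme1/ng1n1
  _ctrl_ns[${ns##*n}]=ng1n1       # ${ns##*n} strips through the last 'n' -> "1"
  ns=/sys/class/nvme/nvme1/nvme1n1
  _ctrl_ns[${ns##*n}]=nvme1n1     # same index 1; the block node wins, as here
  echo "${nvme1_ns[1]}"           # -> nvme1n1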
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:15:58.597 15:27:44 nvme_scc -- scripts/common.sh@18 -- # local i
00:15:58.597 15:27:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:15:58.597 15:27:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:15:58.597 15:27:44 nvme_scc -- scripts/common.sh@27 -- # return 0
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
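At this point nvme1 is fully registered and the scan advances to the next controller: each /sys/class/nvme/nvmeX entry is mapped to its PCI address and gated through pci_can_use, which, per the empty [[ =~ ]] and [[ -z '' ]] tests from scripts/common.sh above, passes every device when no allow/block list is configured. A hedged sketch of that outer loop; reading the sysfs 'address' attribute and the PCI_BLOCKED/PCI_ALLOWED names are assumptions on my part:

  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(< "$ctrl/address")        # e.g. 0000:00:12.0
      pci_can_use "$pci" || continue  # honors PCI_BLOCKED/PCI_ALLOWED if set
      ctrl_dev=${ctrl##*/}            # nvme2
  done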
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:15:58.597 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:15:58.598 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:15:58.599 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@21 --
# IFS=: 00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.600 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:58.601 
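The repeating IFS=:/read/eval triplets in the trace above all come from one small helper. Here is a minimal sketch of that pattern, reconstructed from the function and line numbers visible in the trace rather than copied from SPDK's nvme/functions.sh; the exact whitespace trimming is an assumption:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"               # e.g. declare -gA nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}      # strip the padded field name
            val=${val# }                  # drop the space after ':'
            [[ -n $val ]] || continue     # skip blank fields and headers
            eval "${ref}[${reg}]=\"${val}\""
        done < <("$@")                    # the real script prefixes its own nvme-cli binary here
    }

Each call of the form nvme_get nvme2 id-ctrl /dev/nvme2 therefore leaves a populated associative array behind, which is exactly what the assignments echoed above are building.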
15:27:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:15:58.601 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 ng2n1[ncap]=0x100000 ng2n1[nuse]=0x100000 ng2n1[nsfeat]=0x14 ng2n1[nlbaf]=7 ng2n1[flbas]=0x4 ng2n1[mc]=0x3 ng2n1[dpc]=0x1f ng2n1[dps]=0 ng2n1[nmic]=0 ng2n1[rescap]=0 ng2n1[fpi]=0 ng2n1[dlfeat]=1
00:15:58.602 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 ng2n1[nawupf]=0 ng2n1[nacwu]=0 ng2n1[nabsn]=0 ng2n1[nabo]=0 ng2n1[nabspf]=0 ng2n1[noiob]=0 ng2n1[nvmcap]=0 ng2n1[npwg]=0 ng2n1[npwa]=0 ng2n1[npdg]=0 ng2n1[npda]=0 ng2n1[nows]=0
00:15:58.602 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 ng2n1[mcl]=128 ng2n1[msrc]=127 ng2n1[nulbaf]=0 ng2n1[anagrpid]=0 ng2n1[nsattr]=0 ng2n1[nvmsetid]=0 ng2n1[endgid]=0 ng2n1[nguid]=00000000000000000000000000000000 ng2n1[eui64]=0000000000000000
00:15:58.602 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:15:58.603 15:27:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng2n1
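Reading the captured fields back out: flbas selects the in-use LBA format in its low four bits, and lbads in the matching lbafN entry is the log2 of the data block size. For ng2n1 above, flbas=0x4 points at lbaf4='ms:0 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte blocks with no metadata. A small illustrative helper under those assumptions (not part of functions.sh):

    get_block_size() {
        local -n _ns=$1                      # nameref to e.g. ng2n1
        local fmt=$(( ${_ns[flbas]} & 0xf )) # bits 3:0 = active LBA format index
        local lbaf=${_ns[lbaf${fmt}]}        # e.g. 'ms:0 lbads:12 rp:0 (in use)'
        local lbads=${lbaf#*lbads:}
        lbads=${lbads%% *}
        echo $((1 << lbads))                 # 2^lbads bytes
    }
    # get_block_size ng2n1  -> 4096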
00:15:58.603 15:27:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:58.603 15:27:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:15:58.603 15:27:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:15:58.603 15:27:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:15:58.603 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:15:58.604 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 ng2n2[ncap]=0x100000 ng2n2[nuse]=0x100000 ng2n2[nsfeat]=0x14 ng2n2[nlbaf]=7 ng2n2[flbas]=0x4 ng2n2[mc]=0x3 ng2n2[dpc]=0x1f ng2n2[dps]=0 ng2n2[nmic]=0 ng2n2[rescap]=0 ng2n2[fpi]=0 ng2n2[dlfeat]=1
00:15:58.604 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 ng2n2[nawupf]=0 ng2n2[nacwu]=0 ng2n2[nabsn]=0 ng2n2[nabo]=0 ng2n2[nabspf]=0 ng2n2[noiob]=0 ng2n2[nvmcap]=0 ng2n2[npwg]=0 ng2n2[npwa]=0 ng2n2[npdg]=0 ng2n2[npda]=0 ng2n2[nows]=0
00:15:58.604 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 ng2n2[mcl]=128 ng2n2[msrc]=127 ng2n2[nulbaf]=0 ng2n2[anagrpid]=0 ng2n2[nsattr]=0 ng2n2[nvmsetid]=0 ng2n2[endgid]=0 ng2n2[nguid]=00000000000000000000000000000000 ng2n2[eui64]=0000000000000000
00:15:58.870 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=ng2n2
00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.871 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:58.872 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- 
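The records above are SPDK's nvme/functions.sh helper caching every 'key : value' pair that nvme-cli prints for a namespace into a global associative array (ng2n3, then nvme2n1, and so on). A minimal sketch of that parsing pattern, assuming nvme-cli's one-pair-per-line output as seen in this trace; nvme_get_sketch and its trimming details are illustrative, not SPDK's exact code:

  #!/usr/bin/env bash
  # Cache `nvme id-ns` fields in a named global associative array.
  nvme_get_sketch() {
      local ref=$1 dev=$2 reg val
      declare -gA "$ref=()"                      # e.g. nvme2n1=()
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}               # 'lbaf  4 ' -> 'lbaf4'
          val=${val#"${val%%[![:space:]]*}"}     # trim leading spaces
          [[ -n $reg && -n $val ]] || continue   # skip headers/blank lines
          eval "${ref}[\$reg]=\$val"             # nvme2n1[nsze]=0x100000
      done < <(nvme id-ns "$dev")
  }

  nvme_get_sketch nvme2n1 /dev/nvme2n1
  echo "nsze=${nvme2n1[nsze]}"

Giving read exactly two fields is what keeps colons inside composite values intact, which is why the lbaf descriptors in this trace survive as whole strings like 'ms:0 lbads:12 rp:0 (in use)'.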
nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:58.873 15:27:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:15:58.873 15:27:44 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- 
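Because everything lands in the array as a string, later checks in the suite can lean on bash arithmetic, which accepts the 0x-prefixed values captured a few records back (nsfeat=0x14, flbas=0x4, mc=0x3, dpc=0x1f) directly. A small illustration; the bit-layout comments paraphrase the NVMe spec and are not something this trace itself proves:

  nsfeat=0x14 flbas=0x4
  # For namespaces with at most 16 formats, FLBAS bits 3:0 select the
  # active LBA format index, matching the '(in use)' marker in the trace.
  echo "in-use lbaf index: $((flbas & 0xf))"    # -> 4
  # Generic bit test on any cached hex field:
  if (( nsfeat & (1 << 2) )); then echo "nsfeat bit 2 set"; fi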
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:15:58.873 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:15:58.874 15:27:44 nvme_scc -- 
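The nsze, ncap and nuse values just parsed for nvme2n2 (all 0x100000, same as the other namespaces here) count logical blocks, not bytes. With the in-use format lbaf4 reporting lbads:12, each block is 2^12 = 4096 bytes, so every one of these QEMU namespaces works out to 4 GiB:

  nsze=0x100000 lbads=12            # from 'lbaf4 : ... lbads:12 ... (in use)'
  echo $(( nsze * (1 << lbads) ))   # 4294967296 bytes = 4 GiB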
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.874 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:58.875 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:58.876 
15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:58.876 15:27:44 nvme_scc -- 
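The 'for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*' records show how namespaces are discovered: for /sys/class/nvme/nvme2 the extglob expands to ng2* and nvme2n*, so both the generic character nodes (ng2n1..ng2n3) and the block nodes (nvme2n1..nvme2n3) are visited, and '_ctrl_ns[${ns##*n}]=...' keys each one by its namespace index, the block node overwriting the ng entry for the same index. A standalone sketch of that globbing (needs extglob; paths as in this trace):

  shopt -s extglob nullglob
  declare -A _ctrl_ns
  ctrl=/sys/class/nvme/nvme2
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      _ctrl_ns[${ns##*n}]=${ns##*/}   # ${ns##*n} leaves the index after the last 'n'
  done
  declare -p _ctrl_ns   # ([1]="nvme2n1" [2]="nvme2n2" [3]="nvme2n3")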
nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.876 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:58.877 15:27:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:15:58.877 15:27:44 nvme_scc -- 
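nlbaf=7 is zero-based, so each of these namespaces advertises the eight descriptors lbaf0..lbaf7 seen above: metadata sizes of 0, 8, 16 and 64 bytes crossed with 512-byte (lbads:9) and 4096-byte (lbads:12) sectors, with '(in use)' tagging the active one. Since the descriptors are cached as whole strings, pulling a field back out takes a couple of parameter expansions, assuming exactly the format shown in this trace:

  desc='ms:0 lbads:12 rp:0 (in use)'
  ms=${desc#ms:};        ms=${ms%% *}
  lbads=${desc#*lbads:}; lbads=${lbads%% *}
  echo "metadata=${ms}B block=$((1 << lbads))B"   # metadata=0B block=4096B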
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:58.877 15:27:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:58.877 15:27:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:58.877 15:27:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:58.877 15:27:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:58.877 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:58.878 15:27:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:58.878 15:27:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 
15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:58.878 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:58.879 15:27:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 
15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:58.879 
15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.879 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:58.880 15:27:44 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:58.880 15:27:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:15:58.880 15:27:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
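(The nvme_get dump above is test/common/nvme/functions.sh turning "/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3" output into a global bash associative array: each "reg : val" line of id-ctrl becomes nvme3[reg]=val via eval, so later helpers can consult registers such as nvme3[oncs]=0x15d by name. A condensed sketch of that pattern, assuming nvme-cli's "reg : val" output format; this is a simplified illustration, not the verbatim functions.sh helper:

nvme_get_sketch() { # e.g. nvme_get_sketch nvme3 id-ctrl /dev/nvme3
	local ref=$1 subcmd=$2 dev=$3 reg val
	declare -gA "$ref=()"                          # nvme3 becomes a global associative array
	while IFS=: read -r reg val; do
		[[ -n $val ]] || continue              # skip banners and blank lines
		eval "${ref}[${reg//[[:space:]]/}]=\"\$val\"" # nvme3[oncs]=0x15d, nvme3[mdts]=7, ...
	done < <(nvme "$subcmd" "$dev")
}

The real helper additionally shifts into per-namespace arrays such as nvme3_ns; the registry assignments to ctrls/nvmes/bdfs that follow in the trace hang each parsed array off the controller name.)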
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:15:58.881 15:27:44 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:15:58.881 15:27:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:15:58.881 15:27:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:15:58.881 15:27:44 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:15:59.534 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:16:00.102 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:16:00.103 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:16:00.103 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:16:00.362 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:16:00.362 15:27:46 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:16:00.362 15:27:46 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:16:00.362 15:27:46 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:00.362 15:27:46 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:16:00.362 ************************************
00:16:00.362 START TEST nvme_simple_copy
00:16:00.362 ************************************
00:16:00.362 15:27:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:16:00.621 Initializing NVMe Controllers
00:16:00.621 Attaching to 0000:00:10.0
00:16:00.621 Controller supports SCC. Attached to 0000:00:10.0
00:16:00.621 Namespace ID: 1 size: 6GB
00:16:00.621 Initialization complete.
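(The "Controller supports SCC" line and the ctrl=nvme1 pick above both come down to one bitmask: ONCS (Optional NVM Command Support) bit 8 advertises the Copy command, and every QEMU controller in this run reports oncs=0x15d (binary 1 0101 1101), whose 0x100 bit is set. A minimal sketch of the check, mirroring the ctrl_has_scc/get_oncs xtrace; a simplified rendition, not the exact source:

ctrl_has_scc_sketch() {          # usage: ctrl_has_scc_sketch nvme3
	local -n _ctrl=$1        # nameref into the array nvme_get built
	local oncs=${_ctrl[oncs]:-0}
	(( oncs & 1 << 8 ))      # 0x15d & 0x100 = 0x100 (nonzero) -> Copy/SCC supported
}

Since all four controllers pass, get_ctrls_with_feature returns every one of them and nvme_scc.sh simply takes the first ordered controller, nvme1 at 0000:00:10.0.)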
00:16:00.621
00:16:00.621 Controller QEMU NVMe Ctrl (12340 )
00:16:00.621 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:16:00.621 Namespace Block Size:4096
00:16:00.621 Writing LBAs 0 to 63 with Random Data
00:16:00.621 Copied LBAs from 0 - 63 to the Destination LBA 256
00:16:00.621 LBAs matching Written Data: 64
00:16:00.621
00:16:00.621 real 0m0.367s
00:16:00.621 user 0m0.156s
00:16:00.621 sys 0m0.108s
00:16:00.621 15:27:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:00.621 15:27:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:16:00.621 ************************************
00:16:00.621 END TEST nvme_simple_copy
00:16:00.621 ************************************
00:16:00.880
00:16:00.880 real 0m8.887s
00:16:00.880 user 0m1.628s
00:16:00.880 sys 0m2.260s
00:16:00.880 15:27:46 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:00.880 15:27:46 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:16:00.880 ************************************
00:16:00.880 END TEST nvme_scc
00:16:00.880 ************************************
00:16:00.880 15:27:46 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:16:00.880 15:27:46 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:16:00.880 15:27:46 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:16:00.880 15:27:46 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:16:00.880 15:27:46 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:16:00.880 15:27:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:00.880 15:27:46 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:00.880 15:27:46 -- common/autotest_common.sh@10 -- # set +x
00:16:00.880 ************************************
00:16:00.880 START TEST nvme_fdp
00:16:00.880 ************************************
00:16:00.880 15:27:46 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:16:00.880 * Looking for test storage...
00:16:00.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:16:00.880 15:27:46 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:16:00.880 15:27:46 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:16:00.880 15:27:46 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:16:01.139 15:27:46 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:16:01.139 15:27:46 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.139 15:27:46 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:01.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.139 --rc genhtml_branch_coverage=1 00:16:01.139 --rc genhtml_function_coverage=1 00:16:01.139 --rc genhtml_legend=1 00:16:01.139 --rc geninfo_all_blocks=1 00:16:01.139 --rc geninfo_unexecuted_blocks=1 00:16:01.139 00:16:01.139 ' 00:16:01.139 15:27:46 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:01.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.139 --rc genhtml_branch_coverage=1 00:16:01.139 --rc genhtml_function_coverage=1 00:16:01.139 --rc genhtml_legend=1 00:16:01.139 --rc geninfo_all_blocks=1 00:16:01.139 --rc geninfo_unexecuted_blocks=1 00:16:01.139 00:16:01.139 ' 00:16:01.139 15:27:46 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:01.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.139 --rc genhtml_branch_coverage=1 00:16:01.139 --rc genhtml_function_coverage=1 00:16:01.139 --rc genhtml_legend=1 00:16:01.139 --rc geninfo_all_blocks=1 00:16:01.139 --rc geninfo_unexecuted_blocks=1 00:16:01.139 00:16:01.139 ' 00:16:01.139 15:27:46 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:01.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.139 --rc genhtml_branch_coverage=1 00:16:01.139 --rc genhtml_function_coverage=1 00:16:01.139 --rc genhtml_legend=1 00:16:01.139 --rc geninfo_all_blocks=1 00:16:01.139 --rc geninfo_unexecuted_blocks=1 00:16:01.139 00:16:01.139 ' 00:16:01.139 15:27:46 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.139 15:27:46 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.139 15:27:46 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.139 15:27:46 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.139 15:27:46 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.139 15:27:46 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:16:01.139 15:27:46 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:16:01.139 15:27:46 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:16:01.139 15:27:46 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.139 15:27:46 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:01.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:01.657 Waiting for block devices as requested 00:16:01.657 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:01.916 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:01.916 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:02.175 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:07.454 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:07.454 15:27:53 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:16:07.454 15:27:53 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:16:07.454 15:27:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:07.454 15:27:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:07.454 15:27:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:07.454 15:27:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:16:07.454 15:27:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:16:07.454 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:16:07.455 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:16:07.456 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
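Every nvme0[...] field above is produced by nvme_get, which walks the nvme id-ctrl output line by line, splits each line on the first colon with IFS, and assigns the key/value pair into a global associative array. A minimal, self-contained sketch of that pattern follows; parse_id_ctrl and the array name "info" are illustrative, not part of nvme/functions.sh, and nvme-cli is assumed to be in PATH:

    #!/usr/bin/env bash
    # Sketch of the nvme_get parsing pattern traced above: read "key : value"
    # lines from nvme-cli, split on the first colon, store in an assoc array.
    declare -A info

    parse_id_ctrl() {
        local dev=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}            # keys are padded with spaces
            val=${val#"${val%%[![:space:]]*}"}  # drop leading blanks, keep trailing ones
            [[ -n $reg && -n $val ]] && info[$reg]=$val
        done < <(nvme id-ctrl "$dev")
    }

    parse_id_ctrl /dev/nvme0
    echo "oacs=${info[oacs]} nn=${info[nn]} subnqn=${info[subnqn]}"

functions.sh itself routes the assignment through eval so the caller can name the target array (nvme0, nvme1, ...); the sketch pins a single array for brevity.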
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:16:07.457 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
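lbaf0 through lbaf7 above are the eight LBA formats this namespace supports; flbas=0x4 selects format 4, whose descriptor reads lbads:12, i.e. 2^12 = 4096-byte data blocks with no metadata (ms:0), matching its "(in use)" marker. A sketch of recovering the block size from fields parsed this way; lba_size_bytes is an illustrative helper (bash 4.3+ namerefs), and reading the format index from the low flbas nibble follows the NVMe identify-namespace layout:

    # Derive the in-use LBA data size from id-ns fields parsed as above.
    # Seed values mirror the ng0n1 trace; helper name is illustrative.
    declare -A ns=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')

    lba_size_bytes() {
        local -n _ns=$1
        local idx=$((_ns[flbas] & 0xf))       # low nibble of flbas selects the format
        local lbads=${_ns[lbaf$idx]#*lbads:}  # take the lbads field of that descriptor
        lbads=${lbads%% *}
        echo $((1 << lbads))                  # data size is 2^lbads bytes
    }

    lba_size_bytes ns   # prints 4096 for the namespace above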
00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:07.458 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:16:07.459 15:27:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:16:07.459 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
'nvme1[sn]="12340 "' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:16:07.460 15:27:53 nvme_fdp -- 
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:16:07.460 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:16:07.461 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:16:07.462 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
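Every register value in the dump above comes from one helper: nvme_get runs nvme-cli against the device and splits each "field : value" line on the first colon with IFS=: read -r reg val, then evals the pair into the named global associative array, which is the rhythm visible throughout the trace. A minimal self-contained sketch of that pattern, with simplified whitespace trimming (assumes root and the nvme CLI on PATH):

  # Sketch: load `nvme id-ctrl` output into a global associative array.
  nvme_get() {
      local ref=$1 cmd=$2 dev=$3 reg val
      local -gA "$ref=()"                # e.g. declares global array nvme1
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue      # skip headers and blank lines
          reg=${reg//[[:space:]]/}       # 'vid       ' -> 'vid'
          val=${val# }                   # drop the space after the colon
          eval "${ref}[\$reg]=\$val"     # nvme1[vid]=0x1b36
      done < <(nvme "$cmd" "$dev")
  }
  nvme_get nvme1 id-ctrl /dev/nvme1
  echo "model=${nvme1[mn]} serial=${nvme1[sn]} mdts=${nvme1[mdts]}"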
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
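The extglob pattern in the for line above is what catches both namespace flavors under a controller: the generic character node ngXnY and the block node nvmeXnY. The same expansion in isolation (requires shopt -s extglob; nullglob added for safety):

  # Sketch: expand the namespace glob the trace uses.
  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme1
  # @(ng1|nvme1n)* matches ng1n1 (char dev) and nvme1n1 (block dev)
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      echo "namespace node: ${ns##*/}"
  done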
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:16:07.463 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
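Each lbafN above describes one LBA format: ms is metadata bytes per block, lbads is log2 of the data block size, rp the relative performance, and "(in use)" marks the format selected via flbas (0x7 here, so index 7: 4096-byte blocks with 64 metadata bytes, versus the 512-byte formats in lbaf0 through lbaf3). The arithmetic, spelled out:

  # Sketch: decode 'ms:64 lbads:12 rp:0 (in use)' from the dump above.
  lbaf='ms:64 lbads:12 rp:0 (in use)'
  ms=${lbaf#ms:};        ms=${ms%% *}        # metadata bytes   -> 64
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # log2(block size) -> 12
  echo "block size: $((1 << lbads)) bytes, metadata: $ms bytes/block"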
00:16:07.464 15:27:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:16:07.465 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
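The trace above is SPDK's nvme_get helper (nvme/functions.sh) flattening `nvme id-ns` output into a bash associative array: with IFS=: each "field : value" line of the tool's output splits into reg/val, empty values are skipped, and the pair is eval'd into the per-device array (nvme1n1 here). A minimal standalone sketch of the same technique, assuming nvme-cli is installed and /dev/nvme1n1 exists; the array name ns_info is illustrative — the real helper evals into a caller-supplied name, which is why the log prints both the eval line and the resulting assignment:

    declare -A ns_info                        # one id-ns field per key
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # "lbaf  0 " -> "lbaf0", matching the keys in the trace
        [[ -n $reg && -n $val ]] || continue  # same guard as the [[ -n ... ]] checks logged above
        ns_info[$reg]=${val# }                # drop the space nvme-cli prints after the colon
    done < <(nvme id-ns /dev/nvme1n1)
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"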
00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:16:07.466 15:27:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:07.466 15:27:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:07.466 15:27:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:07.466 15:27:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.466 15:27:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.466 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
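Just before this id-ctrl pass the trace registered the previous controller (functions.sh@58-63): _ctrl_ns[1]=nvme1n1, ctrls[nvme1]=nvme1, nvmes[nvme1]=nvme1_ns, bdfs[nvme1]=0000:00:10.0, ordered_ctrls[1]=nvme1 — three name-keyed lookup tables plus an index-ordered list per controller. A sketch of that bookkeeping, assuming PCIe-attached controllers so the BDF can be taken from the sysfs device link (the harness resolves its pci value slightly earlier, at functions.sh@49, possibly by another route):

    shopt -s nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        dev=${ctrl##*/}                                         # e.g. nvme1
        ctrls[$dev]=$dev
        nvmes[$dev]=${dev}_ns                                   # name of that controller's namespace array
        bdfs[$dev]=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:10.0
        ordered_ctrls[${dev/nvme/}]=$dev                        # slot by controller number
    done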
00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:16:07.467 15:27:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
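Values like frmw=0x3 and lpa=0x7 just above, oacs=0x12a a little earlier, and oncs=0x15d further below are bitmask fields from the NVMe Identify Controller structure, and a harness consumes them bit by bit. A quick decode of the captured oncs value, with bit positions taken from the NVMe base spec rather than from this log:

    oncs=0x15d                                        # as captured below for nvme2
    (( oncs & (1 << 2) )) && echo "Dataset Management supported"
    (( oncs & (1 << 3) )) && echo "Write Zeroes supported"
    (( oncs & (1 << 5) )) || echo "Reservations not supported"

Bit 5 (reservations) being clear is consistent with the rescap=0 namespace fields elsewhere in this log.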
00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:16:07.467 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:16:07.468 15:27:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:16:07.468 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
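The loop entered just below (functions.sh@54) enumerates a controller's namespaces with an extglob pattern that matches both the generic character nodes (ng2n1, ng2n2) and the block nodes (nvme2n1, ...) under the controller's sysfs directory, then runs the same nvme_get id-ns pass on each. A standalone sketch of that glob, assuming extglob is enabled as it is in the harness:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue          # mirrors the guard at functions.sh@55
        echo "namespace node: ${ns##*/}"  # ng2n1, ng2n2, nvme2n1, ...
    done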
00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:16:07.469 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:16:07.738 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 
15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:07.739 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # 
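The trace above is one full pass of the nvme_get helper: it declares a global associative array named after the namespace node (local -gA 'ng2n1=()'), pipes /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 through IFS=: read -r reg val, and evals each non-empty pair into the array. A minimal sketch of that pattern, reconstructed from what the trace shows (not the verbatim SPDK helper; the real functions.sh also squeezes runs of whitespace inside values):

    # Reconstruction of the nvme_get pattern seen in the trace above.
    nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                # as in the log: local -gA 'ng2n1=()'
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # 'lbaf  0 ' -> lbaf0
        val=${val# }                     # drop the padding after ':'
        [[ -n $val ]] || continue        # the [[ -n ... ]] guard in the log
        eval "${ref}[\$reg]=\$val"       # ng2n1[nsze]=0x100000, ...
      done < <("$@")
    }

    # Usage mirroring the trace (the helper there supplies its own nvme-cli path):
    nvme_get ng2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1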
ng2n2[nsze]=0x100000 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:16:07.740 15:27:53 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.740 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 
15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # 
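Here the enumeration moves on from ng2n2 to ng2n3: the glob for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* matches both the generic character nodes (ng2nY) and the block nodes (nvme2nY) under one controller, and _ctrl_ns[${ns##*n}]=... records each by namespace index. A sketch of that loop under the same assumptions as the helper above (extglob is required for the @(...) alternation):

    shopt -s extglob                     # @(...) needs extended globbing
    declare -A _ctrl_ns

    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue           # the [[ -e ... ]] check in the log
      ns_dev=${ns##*/}                   # ng2n3, nvme2n1, ...
      nvme_get "$ns_dev" /usr/local/src/nvme-cli/nvme id-ns "/dev/$ns_dev"
      _ctrl_ns[${ns##*n}]=$ns_dev        # ng2n3 -> _ctrl_ns[3]=ng2n3
    done

Because ${ns##*n} strips down to the namespace index, ng2n1 and nvme2n1 both land on _ctrl_ns[1]; the later nvme2n1 pass simply overwrites the ng2n1 entry, which is exactly the sequence the trace shows.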
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.741 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:16:07.742 
15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:16:07.742 15:27:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.742 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:07.743 15:27:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:16:07.743 15:27:53 nvme_fdp -- 
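The lbafN strings stored above encode each LBA format: lbads is the log2 of the data size (lbads:9 is 512 B, lbads:12 is 4096 B), ms is the metadata byte count, and the low nibble of flbas selects the active format, which is why lbaf4 carries the "(in use)" tag when flbas=0x4. A small decoding sketch against the ng2n3 array built above (field layout as printed by nvme-cli in this log):

    flbas=$(( ${ng2n3[flbas]} & 0xf ))   # low nibble = active LBA format
    lbaf=${ng2n3[lbaf$flbas]}            # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
    ms=${lbaf#ms:};        ms=${ms%% *}
    echo "active format: $(( 1 << lbads )) B blocks, $ms B metadata"

For the dump above this prints 4096 B blocks with 0 B metadata.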
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:07.743 
15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.743 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:16:07.744 15:27:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:07.744 
15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:07.744 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
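Note the identifiers in this dump: nguid and eui64 are all zeroes, meaning the (emulated) namespace advertises no persistent unique ID, so anything keying on namespace identity has to fall back to the node name. A guard for that case, assuming the arrays from the sketches above:

    if [[ ${nvme2n1[nguid]} =~ ^0+$ && ${nvme2n1[eui64]} =~ ^0+$ ]]; then
      echo "nvme2n1: no NGUID/EUI64 assigned, keying on device name"
    fi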
00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:16:07.745 15:27:53 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:16:07.745 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:16:07.746 15:27:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:16:07.746 15:27:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:07.746 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:16:07.747 15:27:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:07.747 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:07.748 15:27:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:16:07.748 15:27:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:07.748 15:27:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:07.748 15:27:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:07.748 15:27:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
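Just before this id-ctrl dump, functions.sh@47-63 finished controller nvme2 (recording it in the ctrls/nvmes/bdfs/ordered_ctrls maps with its PCI address 0000:00:12.0) and advanced to nvme3 at 0000:00:13.0, which scripts/common.sh's pci_can_use accepted: the `[[ =~ 0000:00:13.0 ]]` with an empty left-hand side and the `[[ -z '' ]]` indicate an unset block list and an unset allow list, so it returns 0. A sketch of that outer loop under the same assumptions (the sysfs-based BDF lookup and the PCI_BLOCKED/PCI_ALLOWED names are reconstructions, not verbatim SPDK source):

  pci_can_use() {                                   # scripts/common.sh@18-27, reconstructed
      local i                                       # @18
      [[ ${PCI_BLOCKED:-} =~ $1 ]] && return 1      # @21: block list is empty in this run
      [[ -z ${PCI_ALLOWED:-} ]] && return 0         # @25/@27: no allow list -> usable
      for i in $PCI_ALLOWED; do [[ $i == "$1" ]] && return 0; done
      return 1
  }

  for ctrl in /sys/class/nvme/nvme*; do             # functions.sh@47
      [[ -e $ctrl ]] || continue                    # functions.sh@48
      pci=$(basename "$(readlink -f "$ctrl/device")")   # @49: BDF, e.g. 0000:00:13.0 (assumed derivation)
      pci_can_use "$pci" || continue                # functions.sh@50
      ctrl_dev=${ctrl##*/}                          # functions.sh@51: nvme3
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev" # functions.sh@52: the dump in progress here
      # ... per-namespace nvme_get calls (functions.sh@54-58) ...
      ctrls["$ctrl_dev"]=$ctrl_dev                  # functions.sh@60
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns             # functions.sh@61
      bdfs["$ctrl_dev"]=$pci                        # functions.sh@62
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # functions.sh@63
  done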
00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.748 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 
15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:16:07.749 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:16:07.750 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
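The trace above (and continuing through the power-state fields below) is functions.sh's identify-parsing loop: it walks "field : value" output one entry at a time with IFS=: read and stores every controller register in a per-controller associative array (nvme3 here). A minimal standalone sketch of the same pattern, assuming nvme-cli's "nvme id-ctrl" output format rather than the harness's exact input:

  declare -A nvme3
  while IFS=: read -r reg val; do          # with two names, val keeps everything after the first ':'
    reg=${reg//[[:space:]]/}               # field names such as sqes, cqes, ctratt
    val=${val# }                           # drop the single leading space
    [[ -n $reg && -n $val ]] && nvme3[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme3)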
00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:16:07.751 15:27:53 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:16:08.064 15:27:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:16:08.064 15:27:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:16:08.065 15:27:53 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:16:08.065 15:27:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:16:08.065 15:27:53 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:16:08.065 15:27:53 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:08.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:09.204 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:09.204 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:09.204 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:09.204 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:09.464 15:27:55 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:09.464 15:27:55 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:09.464 15:27:55 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.464 15:27:55 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:16:09.464 ************************************ 00:16:09.464 START TEST nvme_flexible_data_placement 00:16:09.464 ************************************ 00:16:09.464 15:27:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:09.725 Initializing NVMe Controllers 00:16:09.725 Attaching to 0000:00:13.0 00:16:09.725 Controller supports FDP Attached to 0000:00:13.0 00:16:09.725 Namespace ID: 1 Endurance Group ID: 1 00:16:09.725 Initialization complete. 
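The controller selection just traced boils down to a single bit test: get_ctrls_with_feature reads each controller's CTRATT word and keeps those with bit 19 (Flexible Data Placement) set, which is why ctratt=0x88010 selects nvme3 while the 0x8000 controllers are skipped. A hedged sketch of that check, again assuming nvme-cli output rather than the harness's cached arrays:

  ctrl_has_fdp() {
    local dev=$1 ctratt
    ctratt=$(nvme id-ctrl "/dev/$dev" | awk -F': *' '/^ctratt/ {print $2}')
    (( ctratt & 1 << 19 ))                 # 0x88010 & 0x80000 != 0 -> FDP capable
  }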
00:16:09.725 00:16:09.725 ================================== 00:16:09.725 == FDP tests for Namespace: #01 == 00:16:09.725 ================================== 00:16:09.725 00:16:09.725 Get Feature: FDP: 00:16:09.725 ================= 00:16:09.725 Enabled: Yes 00:16:09.725 FDP configuration Index: 0 00:16:09.725 00:16:09.725 FDP configurations log page 00:16:09.725 =========================== 00:16:09.725 Number of FDP configurations: 1 00:16:09.725 Version: 0 00:16:09.725 Size: 112 00:16:09.725 FDP Configuration Descriptor: 0 00:16:09.725 Descriptor Size: 96 00:16:09.725 Reclaim Group Identifier format: 2 00:16:09.725 FDP Volatile Write Cache: Not Present 00:16:09.725 FDP Configuration: Valid 00:16:09.725 Vendor Specific Size: 0 00:16:09.725 Number of Reclaim Groups: 2 00:16:09.725 Number of Reclaim Unit Handles: 8 00:16:09.725 Max Placement Identifiers: 128 00:16:09.725 Number of Namespaces Supported: 256 00:16:09.725 Reclaim unit Nominal Size: 6000000 bytes 00:16:09.725 Estimated Reclaim Unit Time Limit: Not Reported 00:16:09.725 RUH Desc #000: RUH Type: Initially Isolated 00:16:09.725 RUH Desc #001: RUH Type: Initially Isolated 00:16:09.725 RUH Desc #002: RUH Type: Initially Isolated 00:16:09.725 RUH Desc #003: RUH Type: Initially Isolated 00:16:09.725 RUH Desc #004: RUH Type: Initially Isolated 00:16:09.725 RUH Desc #005: RUH Type: Initially Isolated 00:16:09.725 RUH Desc #006: RUH Type: Initially Isolated 00:16:09.725 RUH Desc #007: RUH Type: Initially Isolated 00:16:09.725 00:16:09.725 FDP reclaim unit handle usage log page 00:16:09.725 ====================================== 00:16:09.725 Number of Reclaim Unit Handles: 8 00:16:09.725 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:16:09.725 RUH Usage Desc #001: RUH Attributes: Unused 00:16:09.725 RUH Usage Desc #002: RUH Attributes: Unused 00:16:09.725 RUH Usage Desc #003: RUH Attributes: Unused 00:16:09.725 RUH Usage Desc #004: RUH Attributes: Unused 00:16:09.725 RUH Usage Desc #005: RUH Attributes: Unused 00:16:09.725 RUH Usage Desc #006: RUH Attributes: Unused 00:16:09.725 RUH Usage Desc #007: RUH Attributes: Unused 00:16:09.725 00:16:09.725 FDP statistics log page 00:16:09.725 ======================= 00:16:09.725 Host bytes with metadata written: 852942848 00:16:09.725 Media bytes with metadata written: 853028864 00:16:09.725 Media bytes erased: 0 00:16:09.725 00:16:09.725 FDP Reclaim unit handle status 00:16:09.725 ============================== 00:16:09.725 Number of RUHS descriptors: 2 00:16:09.725 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003292 00:16:09.725 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:16:09.725 00:16:09.725 FDP write on placement id: 0 success 00:16:09.725 00:16:09.725 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:16:09.725 00:16:09.725 IO mgmt send: RUH update for Placement ID: #0 Success 00:16:09.725 00:16:09.725 Get Feature: FDP Events for Placement handle: #0 00:16:09.725 ======================== 00:16:09.725 Number of FDP Events: 6 00:16:09.725 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:16:09.725 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:16:09.725 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:16:09.725 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:16:09.725 FDP Event: #4 Type: Media Reallocated Enabled: No 00:16:09.725 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:16:09.725 00:16:09.725 FDP events log page
00:16:09.725 =================== 00:16:09.725 Number of FDP events: 1 00:16:09.725 FDP Event #0: 00:16:09.725 Event Type: RU Not Written to Capacity 00:16:09.725 Placement Identifier: Valid 00:16:09.725 NSID: Valid 00:16:09.725 Location: Valid 00:16:09.725 Placement Identifier: 0 00:16:09.725 Event Timestamp: b 00:16:09.725 Namespace Identifier: 1 00:16:09.725 Reclaim Group Identifier: 0 00:16:09.725 Reclaim Unit Handle Identifier: 0 00:16:09.725 00:16:09.725 FDP test passed 00:16:09.725 00:16:09.725 real 0m0.348s 00:16:09.725 user 0m0.128s 00:16:09.725 sys 0m0.118s 00:16:09.725 15:27:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.725 15:27:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:16:09.725 ************************************ 00:16:09.725 END TEST nvme_flexible_data_placement 00:16:09.725 ************************************ 00:16:09.725 00:16:09.725 real 0m8.924s 00:16:09.725 user 0m1.604s 00:16:09.725 sys 0m2.369s 00:16:09.725 15:27:55 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.725 15:27:55 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:16:09.725 ************************************ 00:16:09.725 END TEST nvme_fdp 00:16:09.725 ************************************ 00:16:09.725 15:27:55 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:16:09.725 15:27:55 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:09.725 15:27:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:09.725 15:27:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.725 15:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:09.725 ************************************ 00:16:09.725 START TEST nvme_rpc 00:16:09.725 ************************************ 00:16:09.725 15:27:55 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:09.986 * Looking for test storage... 
00:16:09.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.986 15:27:55 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:09.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.986 --rc genhtml_branch_coverage=1 00:16:09.986 --rc genhtml_function_coverage=1 00:16:09.986 --rc genhtml_legend=1 00:16:09.986 --rc geninfo_all_blocks=1 00:16:09.986 --rc geninfo_unexecuted_blocks=1 00:16:09.986 00:16:09.986 ' 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:09.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.986 --rc genhtml_branch_coverage=1 00:16:09.986 --rc genhtml_function_coverage=1 00:16:09.986 --rc genhtml_legend=1 00:16:09.986 --rc geninfo_all_blocks=1 00:16:09.986 --rc geninfo_unexecuted_blocks=1 00:16:09.986 00:16:09.986 ' 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:16:09.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.986 --rc genhtml_branch_coverage=1 00:16:09.986 --rc genhtml_function_coverage=1 00:16:09.986 --rc genhtml_legend=1 00:16:09.986 --rc geninfo_all_blocks=1 00:16:09.986 --rc geninfo_unexecuted_blocks=1 00:16:09.986 00:16:09.986 ' 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:09.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.986 --rc genhtml_branch_coverage=1 00:16:09.986 --rc genhtml_function_coverage=1 00:16:09.986 --rc genhtml_legend=1 00:16:09.986 --rc geninfo_all_blocks=1 00:16:09.986 --rc geninfo_unexecuted_blocks=1 00:16:09.986 00:16:09.986 ' 00:16:09.986 15:27:55 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.986 15:27:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:09.986 15:27:55 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:10.246 15:27:55 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:16:10.246 15:27:55 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:10.246 15:27:55 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:16:10.246 15:27:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:16:10.246 15:27:55 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67352 00:16:10.246 15:27:55 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:10.246 15:27:55 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:16:10.246 15:27:55 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67352 00:16:10.246 15:27:55 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67352 ']' 00:16:10.246 15:27:55 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.246 15:27:55 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.246 15:27:55 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.246 15:27:55 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.246 15:27:55 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.246 [2024-11-20 15:27:56.104485] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:16:10.246 [2024-11-20 15:27:56.104675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67352 ] 00:16:10.506 [2024-11-20 15:27:56.309471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:10.767 [2024-11-20 15:27:56.487081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.767 [2024-11-20 15:27:56.487112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.709 15:27:57 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.709 15:27:57 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:11.709 15:27:57 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:16:11.967 Nvme0n1 00:16:11.967 15:27:57 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:16:11.967 15:27:57 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:16:12.225 request: 00:16:12.225 { 00:16:12.225 "bdev_name": "Nvme0n1", 00:16:12.225 "filename": "non_existing_file", 00:16:12.225 "method": "bdev_nvme_apply_firmware", 00:16:12.225 "req_id": 1 00:16:12.225 } 00:16:12.225 Got JSON-RPC error response 00:16:12.225 response: 00:16:12.225 { 00:16:12.225 "code": -32603, 00:16:12.225 "message": "open file failed." 00:16:12.225 } 00:16:12.225 15:27:57 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:16:12.225 15:27:57 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:16:12.225 15:27:57 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:12.484 15:27:58 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:12.484 15:27:58 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67352 00:16:12.484 15:27:58 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67352 ']' 00:16:12.484 15:27:58 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67352 00:16:12.484 15:27:58 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:12.484 15:27:58 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.484 15:27:58 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67352 00:16:12.484 15:27:58 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.484 killing process with pid 67352 00:16:12.484 15:27:58 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.484 15:27:58 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67352' 00:16:12.484 15:27:58 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67352 00:16:12.484 15:27:58 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67352 00:16:15.770 00:16:15.770 real 0m5.318s 00:16:15.770 user 0m9.886s 00:16:15.770 sys 0m0.820s 00:16:15.770 15:28:00 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.770 15:28:00 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.770 ************************************ 00:16:15.770 END TEST nvme_rpc 00:16:15.770 ************************************ 00:16:15.770 15:28:01 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:15.770 15:28:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:16:15.770 15:28:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.770 15:28:01 -- common/autotest_common.sh@10 -- # set +x 00:16:15.770 ************************************ 00:16:15.770 START TEST nvme_rpc_timeouts 00:16:15.770 ************************************ 00:16:15.770 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:15.770 * Looking for test storage... 00:16:15.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:15.770 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:15.770 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:16:15.770 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:15.770 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.770 15:28:01 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:16:15.770 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.770 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:15.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.770 --rc genhtml_branch_coverage=1 00:16:15.770 --rc genhtml_function_coverage=1 00:16:15.770 --rc genhtml_legend=1 00:16:15.770 --rc geninfo_all_blocks=1 00:16:15.770 --rc geninfo_unexecuted_blocks=1 00:16:15.770 00:16:15.770 ' 00:16:15.770 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:15.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.770 --rc genhtml_branch_coverage=1 00:16:15.770 --rc genhtml_function_coverage=1 00:16:15.770 --rc genhtml_legend=1 00:16:15.770 --rc geninfo_all_blocks=1 00:16:15.770 --rc geninfo_unexecuted_blocks=1 00:16:15.770 00:16:15.770 ' 00:16:15.770 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:15.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.770 --rc genhtml_branch_coverage=1 00:16:15.770 --rc genhtml_function_coverage=1 00:16:15.770 --rc genhtml_legend=1 00:16:15.770 --rc geninfo_all_blocks=1 00:16:15.770 --rc geninfo_unexecuted_blocks=1 00:16:15.770 00:16:15.770 ' 00:16:15.770 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:15.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.770 --rc genhtml_branch_coverage=1 00:16:15.770 --rc genhtml_function_coverage=1 00:16:15.770 --rc genhtml_legend=1 00:16:15.770 --rc geninfo_all_blocks=1 00:16:15.770 --rc geninfo_unexecuted_blocks=1 00:16:15.770 00:16:15.770 ' 00:16:15.770 15:28:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.770 15:28:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67439 00:16:15.770 15:28:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67439 00:16:15.770 15:28:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67471 00:16:15.771 15:28:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
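The trap registered above is the harness's standard cleanup idiom: whatever path the test exits through, the spdk_tgt daemon is killed and the scratch settings files are removed. The same pattern in isolation (variable names illustrative, not the harness's exact code):

  spdk_tgt_pid=
  tmp_default=/tmp/settings_default_$$; tmp_modified=/tmp/settings_modified_$$
  trap 'kill -9 "$spdk_tgt_pid" 2>/dev/null; rm -f "$tmp_default" "$tmp_modified"' SIGINT SIGTERM EXIT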
00:16:15.771 15:28:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:15.771 15:28:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67471 00:16:15.771 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67471 ']' 00:16:15.771 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.771 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.771 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.771 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.771 15:28:01 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:15.771 [2024-11-20 15:28:01.412395] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:16:15.771 [2024-11-20 15:28:01.412879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67471 ] 00:16:15.771 [2024-11-20 15:28:01.619129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:16.030 [2024-11-20 15:28:01.779693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.030 [2024-11-20 15:28:01.779718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.016 15:28:02 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.016 15:28:02 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:16:17.016 15:28:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:16:17.016 Checking default timeout settings: 00:16:17.016 15:28:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:17.275 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:16:17.275 Making settings changes with rpc: 00:16:17.275 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:16:17.535 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:16:17.535 Check default vs. 
modified settings: 00:16:17.535 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67439 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67439 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:17.795 Setting action_on_timeout is changed as expected. 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67439 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:17.795 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67439 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:17.796 Setting timeout_us is changed as expected. 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67439 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67439 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:17.796 Setting timeout_admin_us is changed as expected. 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67439 /tmp/settings_modified_67439 00:16:17.796 15:28:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67471 00:16:17.796 15:28:03 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67471 ']' 00:16:17.796 15:28:03 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67471 00:16:17.796 15:28:03 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:16:17.796 15:28:03 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.796 15:28:03 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67471 00:16:17.796 killing process with pid 67471 00:16:17.796 15:28:03 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:17.796 15:28:03 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:17.796 15:28:03 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67471' 00:16:17.796 15:28:03 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67471 00:16:17.796 15:28:03 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67471 00:16:21.084 RPC TIMEOUT SETTING TEST PASSED. 00:16:21.084 15:28:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
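Each of the three checks above follows the same recipe: grep the setting out of the default and modified save_config dumps, strip everything but alphanumerics, and confirm the modified value is the one bdev_nvme_set_options was asked to write (abort, 12000000, 24000000). A compact sketch of one iteration, reusing this run's tmpfile names; check_setting is an illustrative helper, not part of the harness:

  check_setting() {
    local name=$1 expect=$2 before after
    before=$(grep "$name" /tmp/settings_default_67439 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$name" /tmp/settings_modified_67439 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" && $after == "$expect" ]] && echo "Setting $name is changed as expected."
  }
  check_setting timeout_admin_us 24000000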
00:16:21.084 ************************************ 00:16:21.084 END TEST nvme_rpc_timeouts 00:16:21.084 ************************************ 00:16:21.084 00:16:21.084 real 0m5.580s 00:16:21.084 user 0m10.560s 00:16:21.084 sys 0m0.757s 00:16:21.084 15:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.084 15:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:21.084 15:28:06 -- spdk/autotest.sh@239 -- # uname -s 00:16:21.084 15:28:06 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:16:21.084 15:28:06 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:16:21.084 15:28:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:21.084 15:28:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.084 15:28:06 -- common/autotest_common.sh@10 -- # set +x 00:16:21.084 ************************************ 00:16:21.084 START TEST sw_hotplug 00:16:21.084 ************************************ 00:16:21.084 15:28:06 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:16:21.084 * Looking for test storage... 00:16:21.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:21.084 15:28:06 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:21.084 15:28:06 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:21.084 15:28:06 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:16:21.084 15:28:06 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.084 15:28:06 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:16:21.084 15:28:06 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.084 15:28:06 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:21.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.084 --rc genhtml_branch_coverage=1 00:16:21.084 --rc genhtml_function_coverage=1 00:16:21.084 --rc genhtml_legend=1 00:16:21.084 --rc geninfo_all_blocks=1 00:16:21.084 --rc geninfo_unexecuted_blocks=1 00:16:21.084 00:16:21.084 ' 00:16:21.084 15:28:06 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:21.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.084 --rc genhtml_branch_coverage=1 00:16:21.084 --rc genhtml_function_coverage=1 00:16:21.084 --rc genhtml_legend=1 00:16:21.084 --rc geninfo_all_blocks=1 00:16:21.084 --rc geninfo_unexecuted_blocks=1 00:16:21.084 00:16:21.084 ' 00:16:21.084 15:28:06 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:21.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.084 --rc genhtml_branch_coverage=1 00:16:21.084 --rc genhtml_function_coverage=1 00:16:21.084 --rc genhtml_legend=1 00:16:21.084 --rc geninfo_all_blocks=1 00:16:21.084 --rc geninfo_unexecuted_blocks=1 00:16:21.084 00:16:21.084 ' 00:16:21.084 15:28:06 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:21.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.084 --rc genhtml_branch_coverage=1 00:16:21.084 --rc genhtml_function_coverage=1 00:16:21.084 --rc genhtml_legend=1 00:16:21.084 --rc geninfo_all_blocks=1 00:16:21.084 --rc geninfo_unexecuted_blocks=1 00:16:21.084 00:16:21.084 ' 00:16:21.084 15:28:06 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:21.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:21.655 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:21.655 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:21.655 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:21.655 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:21.655 15:28:07 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:16:21.655 15:28:07 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:16:21.655 15:28:07 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
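nvme_in_userspace, expanded in the trace that follows, discovers NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), i.e. the 0108/-p02 rows of lspci. A condensed sketch of the same enumeration (pipeline ordering is an assumption; the harness composes it from helper functions):

  nvme_in_userspace() {
    # machine-readable numeric listing; keep prog-if 02 rows, then match class 0108
    lspci -mm -n -D | grep -i -- -p02 | tr -d '"' | awk '$2 == "0108" {print $1}'
  }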
00:16:21.655 15:28:07 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@233 -- # local class 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:16:21.655 15:28:07 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:21.656 15:28:07 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:21.656 15:28:07 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:16:21.657 15:28:07 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:21.657 15:28:07 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:16:21.657 15:28:07 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:16:21.657 15:28:07 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:22.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:22.484 Waiting for block devices as requested 00:16:22.484 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:22.484 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:22.741 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:22.741 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:28.028 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:28.028 15:28:13 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:16:28.028 15:28:13 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:28.596 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:16:28.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:28.596 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:16:28.855 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:16:29.114 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:29.114 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:16:29.373 15:28:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68363 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:16:29.373 15:28:15 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:29.373 15:28:15 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:29.373 15:28:15 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:29.373 15:28:15 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:29.373 15:28:15 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:16:29.373 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:29.374 15:28:15 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:29.632 Initializing NVMe Controllers 00:16:29.632 Attaching to 0000:00:10.0 00:16:29.632 Attaching to 0000:00:11.0 00:16:29.632 Attached to 0000:00:10.0 00:16:29.632 Attached to 0000:00:11.0 00:16:29.632 Initialization complete. Starting I/O... 
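
[editor's note] The nvme_in_userspace enumeration traced just before this launch (common.sh@233-245 and @298-329) builds the NVMe BDF list by matching PCI class 01 / subclass 08 / prog-if 02 against lspci output. A sketch assembled from the exact flags shown in the trace; the pipeline order is inferred, since xtrace prints pipeline stages out of order:

iter_nvme_bdfs() {
    local class subclass progif
    class=$(printf '%02x' 1)     # 01: mass-storage controllers
    subclass=$(printf '%02x' 8)  # 08: non-volatile memory subsystem
    progif=$(printf '%02x' 2)    # 02: NVM Express programming interface
    # -mm machine-readable, -n numeric IDs, -D always print the PCI domain
    lspci -mm -n -D \
        | grep -i -- "-p${progif}" \
        | awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'
}
# yields one BDF per line, e.g. 0000:00:10.0; pci_can_use() then filters the
# list against PCI_ALLOWED before it lands in ${nvmes[@]}
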
00:16:29.632 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:16:29.632 QEMU NVMe Ctrl (12341 ): 1 I/Os completed (+1) 00:16:29.632 00:16:30.570 QEMU NVMe Ctrl (12340 ): 1148 I/Os completed (+1148) 00:16:30.570 QEMU NVMe Ctrl (12341 ): 1149 I/Os completed (+1148) 00:16:30.570 00:16:31.948 QEMU NVMe Ctrl (12340 ): 2740 I/Os completed (+1592) 00:16:31.948 QEMU NVMe Ctrl (12341 ): 2745 I/Os completed (+1596) 00:16:31.948 00:16:32.884 QEMU NVMe Ctrl (12340 ): 4552 I/Os completed (+1812) 00:16:32.884 QEMU NVMe Ctrl (12341 ): 4565 I/Os completed (+1820) 00:16:32.884 00:16:33.825 QEMU NVMe Ctrl (12340 ): 6076 I/Os completed (+1524) 00:16:33.825 QEMU NVMe Ctrl (12341 ): 6091 I/Os completed (+1526) 00:16:33.825 00:16:34.776 QEMU NVMe Ctrl (12340 ): 7460 I/Os completed (+1384) 00:16:34.776 QEMU NVMe Ctrl (12341 ): 7504 I/Os completed (+1413) 00:16:34.776 00:16:35.344 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:35.344 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:35.344 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:35.344 [2024-11-20 15:28:21.229272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:35.344 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:35.344 [2024-11-20 15:28:21.231568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.231758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.231823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.231937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:35.344 [2024-11-20 15:28:21.235627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.235809] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.235868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.235986] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:35.344 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:35.344 [2024-11-20 15:28:21.268625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
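
[editor's note] The bare "echo 1" steps at sw_hotplug.sh@40 trigger the surprise removals logged above; xtrace hides the redirection targets, so the sysfs paths below are an assumption based on the standard Linux PCI hotplug interface, and the helper names are invented for the sketch:

surprise_remove() {
    local bdf=$1
    # detach the device from the bus; the hotplug app then logs
    # "Controller removed" and aborts the outstanding trackers
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"
}
rescan_bus() {
    # bring removed devices back (plausibly the echo 1 at sw_hotplug.sh@56),
    # producing the "Attaching to ..." lines that follow
    echo 1 > /sys/bus/pci/rescan
}
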
00:16:35.344 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:35.344 [2024-11-20 15:28:21.270782] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.270967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.271007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.271030] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:35.344 [2024-11-20 15:28:21.274087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.274128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.274152] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 [2024-11-20 15:28:21.274172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.344 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:35.344 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:35.344 EAL: Scan for (pci) bus failed. 00:16:35.344 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:35.603 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:35.603 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:35.603 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:35.603 00:16:35.603 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:35.603 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:35.603 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:35.603 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:35.603 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:35.603 Attaching to 0000:00:10.0 00:16:35.603 Attached to 0000:00:10.0 00:16:35.862 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:35.862 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:35.862 15:28:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:35.862 Attaching to 0000:00:11.0 00:16:35.862 Attached to 0000:00:11.0 00:16:36.799 QEMU NVMe Ctrl (12340 ): 1524 I/Os completed (+1524) 00:16:36.799 QEMU NVMe Ctrl (12341 ): 1340 I/Os completed (+1340) 00:16:36.799 00:16:37.736 QEMU NVMe Ctrl (12340 ): 3262 I/Os completed (+1738) 00:16:37.736 QEMU NVMe Ctrl (12341 ): 3105 I/Os completed (+1765) 00:16:37.736 00:16:38.672 QEMU NVMe Ctrl (12340 ): 4868 I/Os completed (+1606) 00:16:38.672 QEMU NVMe Ctrl (12341 ): 4715 I/Os completed (+1610) 00:16:38.672 00:16:39.609 QEMU NVMe Ctrl (12340 ): 6833 I/Os completed (+1965) 00:16:39.609 QEMU NVMe Ctrl (12341 ): 6683 I/Os completed (+1968) 00:16:39.609 00:16:40.547 QEMU NVMe Ctrl (12340 ): 8813 I/Os completed (+1980) 00:16:40.547 QEMU NVMe Ctrl (12341 ): 8672 I/Os completed (+1989) 00:16:40.547 00:16:41.924 QEMU NVMe Ctrl (12340 ): 10781 I/Os completed (+1968) 00:16:41.924 QEMU NVMe Ctrl (12341 ): 10655 I/Os completed (+1983) 00:16:41.924 00:16:42.859 QEMU NVMe Ctrl (12340 ): 12757 I/Os completed (+1976) 00:16:42.859 QEMU NVMe Ctrl (12341 ): 12641 I/Os completed (+1986) 
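
[editor's note] After each event the test re-binds the disks to uio_pci_generic with four writes per device (sw_hotplug.sh@59-62: the driver name, the BDF twice, then an empty string). The redirection targets are again hidden by xtrace, so the mapping below to the usual driver_override sequence is an assumption:

rebind() {
    local drv=$1 bdf=$2
    echo "$drv" > "/sys/bus/pci/devices/$bdf/driver_override"  # @59: pin the next probe
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"    # @60: drop the current driver (may fail if unbound)
    echo "$bdf" > /sys/bus/pci/drivers_probe                   # @61: re-probe; the override wins
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"      # @62: clear the override again
}
# e.g. rebind uio_pci_generic 0000:00:10.0
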
00:16:42.859 00:16:43.796 QEMU NVMe Ctrl (12340 ): 14745 I/Os completed (+1988) 00:16:43.796 QEMU NVMe Ctrl (12341 ): 14647 I/Os completed (+2006) 00:16:43.796 00:16:44.731 QEMU NVMe Ctrl (12340 ): 16725 I/Os completed (+1980) 00:16:44.731 QEMU NVMe Ctrl (12341 ): 16631 I/Os completed (+1984) 00:16:44.731 00:16:45.668 QEMU NVMe Ctrl (12340 ): 18629 I/Os completed (+1904) 00:16:45.668 QEMU NVMe Ctrl (12341 ): 18545 I/Os completed (+1914) 00:16:45.668 00:16:46.607 QEMU NVMe Ctrl (12340 ): 20322 I/Os completed (+1693) 00:16:46.607 QEMU NVMe Ctrl (12341 ): 20301 I/Os completed (+1756) 00:16:46.607 00:16:47.544 QEMU NVMe Ctrl (12340 ): 21815 I/Os completed (+1493) 00:16:47.544 QEMU NVMe Ctrl (12341 ): 21805 I/Os completed (+1504) 00:16:47.544 00:16:47.803 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:47.803 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:47.803 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:47.803 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:47.803 [2024-11-20 15:28:33.638546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:47.803 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:47.803 [2024-11-20 15:28:33.641387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.641610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.641808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.641849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:47.803 [2024-11-20 15:28:33.645610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.645691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.645725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.645753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:47.803 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:47.803 [2024-11-20 15:28:33.677743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:47.803 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:47.803 [2024-11-20 15:28:33.680034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.680094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.680126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.680150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:47.803 [2024-11-20 15:28:33.683595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.683663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.683692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 [2024-11-20 15:28:33.683720] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:47.803 EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:11.0/subsystem_vendor 00:16:47.803 EAL: Scan for (pci) bus failed. 00:16:47.803 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:47.803 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:48.071 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:48.071 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:48.071 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:48.071 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:48.071 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:48.071 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:48.071 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:48.071 15:28:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:48.071 Attaching to 0000:00:10.0 00:16:48.071 Attached to 0000:00:10.0 00:16:48.346 15:28:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:48.346 15:28:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:48.346 15:28:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:48.346 Attaching to 0000:00:11.0 00:16:48.346 Attached to 0000:00:11.0 00:16:48.604 QEMU NVMe Ctrl (12340 ): 884 I/Os completed (+884) 00:16:48.604 QEMU NVMe Ctrl (12341 ): 702 I/Os completed (+702) 00:16:48.604 00:16:49.982 QEMU NVMe Ctrl (12340 ): 2508 I/Os completed (+1624) 00:16:49.982 QEMU NVMe Ctrl (12341 ): 2333 I/Os completed (+1631) 00:16:49.982 00:16:50.551 QEMU NVMe Ctrl (12340 ): 3944 I/Os completed (+1436) 00:16:50.551 QEMU NVMe Ctrl (12341 ): 3772 I/Os completed (+1439) 00:16:50.551 00:16:51.930 QEMU NVMe Ctrl (12340 ): 5728 I/Os completed (+1784) 00:16:51.930 QEMU NVMe Ctrl (12341 ): 5556 I/Os completed (+1784) 00:16:51.930 00:16:52.869 QEMU NVMe Ctrl (12340 ): 7548 I/Os completed (+1820) 00:16:52.869 QEMU NVMe Ctrl (12341 ): 7376 I/Os completed (+1820) 00:16:52.869 00:16:53.814 QEMU NVMe Ctrl (12340 ): 8972 I/Os completed (+1424) 00:16:53.814 QEMU NVMe Ctrl (12341 ): 8808 I/Os completed (+1432) 00:16:53.814 00:16:54.750 QEMU NVMe Ctrl (12340 ): 10384 I/Os completed (+1412) 00:16:54.750 QEMU NVMe Ctrl (12341 ): 10224 I/Os completed (+1416) 00:16:54.750 
00:16:55.686 QEMU NVMe Ctrl (12340 ): 11796 I/Os completed (+1412) 00:16:55.686 QEMU NVMe Ctrl (12341 ): 11649 I/Os completed (+1425) 00:16:55.686 00:16:56.622 QEMU NVMe Ctrl (12340 ): 13332 I/Os completed (+1536) 00:16:56.622 QEMU NVMe Ctrl (12341 ): 13188 I/Os completed (+1539) 00:16:56.622 00:16:57.560 QEMU NVMe Ctrl (12340 ): 14912 I/Os completed (+1580) 00:16:57.560 QEMU NVMe Ctrl (12341 ): 14773 I/Os completed (+1585) 00:16:57.560 00:16:58.937 QEMU NVMe Ctrl (12340 ): 16812 I/Os completed (+1900) 00:16:58.937 QEMU NVMe Ctrl (12341 ): 16673 I/Os completed (+1900) 00:16:58.937 00:16:59.873 QEMU NVMe Ctrl (12340 ): 18836 I/Os completed (+2024) 00:16:59.873 QEMU NVMe Ctrl (12341 ): 18697 I/Os completed (+2024) 00:16:59.873 00:17:00.132 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:00.132 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:00.132 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:00.132 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:00.132 [2024-11-20 15:28:46.068950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:00.132 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:00.132 [2024-11-20 15:28:46.073513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.132 [2024-11-20 15:28:46.073716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.132 [2024-11-20 15:28:46.073828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.132 [2024-11-20 15:28:46.074084] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.132 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:00.132 [2024-11-20 15:28:46.080409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.132 [2024-11-20 15:28:46.080671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.132 [2024-11-20 15:28:46.080727] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.132 [2024-11-20 15:28:46.080773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.391 EAL: Cannot open sysfs resource 00:17:00.391 EAL: pci_scan_one(): cannot parse resource 00:17:00.391 EAL: Scan for (pci) bus failed. 00:17:00.391 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:00.391 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:00.391 [2024-11-20 15:28:46.118830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:00.391 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:00.391 [2024-11-20 15:28:46.122751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.391 [2024-11-20 15:28:46.122850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.391 [2024-11-20 15:28:46.122906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.391 [2024-11-20 15:28:46.122952] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.391 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:00.391 [2024-11-20 15:28:46.128379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.391 [2024-11-20 15:28:46.128459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.391 [2024-11-20 15:28:46.128513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.391 [2024-11-20 15:28:46.128554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:00.391 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:00.391 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:00.391 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:17:00.391 EAL: Scan for (pci) bus failed. 00:17:00.391 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:00.391 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:00.391 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:00.649 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:00.649 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:00.649 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:00.649 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:00.649 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:00.649 Attaching to 0000:00:10.0 00:17:00.649 Attached to 0000:00:10.0 00:17:00.649 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:00.649 QEMU NVMe Ctrl (12340 ): 124 I/Os completed (+124) 00:17:00.649 00:17:00.649 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:00.649 15:28:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:00.649 Attaching to 0000:00:11.0 00:17:00.649 Attached to 0000:00:11.0 00:17:00.649 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:00.649 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:00.649 [2024-11-20 15:28:46.533724] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:17:12.859 15:28:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:12.859 15:28:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:12.859 15:28:58 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.31 00:17:12.859 15:28:58 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.31 00:17:12.859 15:28:58 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:17:12.859 15:28:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.31 00:17:12.859 15:28:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.31 2 00:17:12.859 remove_attach_helper took 43.31s 
to complete (handling 2 nvme drive(s)) 15:28:58 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:17:19.425 15:29:04 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68363 00:17:19.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.425 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68363) - No such process 00:17:19.425 15:29:04 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68363 00:17:19.425 15:29:04 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:17:19.425 15:29:04 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:17:19.425 15:29:04 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:17:19.425 15:29:04 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68901 00:17:19.425 15:29:04 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:17:19.425 15:29:04 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68901 00:17:19.425 15:29:04 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68901 ']' 00:17:19.425 15:29:04 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.425 15:29:04 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.425 15:29:04 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.425 15:29:04 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.425 15:29:04 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:19.425 15:29:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:19.425 [2024-11-20 15:29:04.685391] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
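
[editor's note] The 43.31 s figure above is produced by the timing wrapper traced at autotest_common.sh@709-722: a local TIMEFORMAT=%2R plus the bash time keyword, captured into helper_time. A simplified sketch; the real wrapper keeps the command's own output flowing via exec, which is elided here:

run_timed() {
    local TIMEFORMAT=%2R elapsed rc=0
    # `time` prints the %2R wall clock (two decimals) on the group's stderr;
    # capture that while discarding the command's own output (a simplification)
    elapsed=$( { time "$@" > /dev/null 2>&1; } 2>&1 ) || rc=$?
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$elapsed" "${#nvmes[@]}"
    return "$rc"
}
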
00:17:19.425 [2024-11-20 15:29:04.685882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68901 ] 00:17:19.425 [2024-11-20 15:29:04.862257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.425 [2024-11-20 15:29:04.981630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.362 15:29:05 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.362 15:29:05 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:17:20.362 15:29:05 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:20.362 15:29:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.362 15:29:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:20.362 15:29:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.362 15:29:05 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:17:20.362 15:29:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:20.362 15:29:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:20.362 15:29:05 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:17:20.362 15:29:05 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:17:20.362 15:29:05 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:17:20.362 15:29:05 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:17:20.362 15:29:05 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:17:20.362 15:29:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:20.362 15:29:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:20.362 15:29:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:20.362 15:29:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:20.362 15:29:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:26.931 15:29:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:26.931 15:29:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:26.931 15:29:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:26.931 15:29:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.931 15:29:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:26.931 [2024-11-20 15:29:12.063313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
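
[editor's note] With spdk_tgt up and listening on /var/tmp/spdk.sock, the bdev-aware pass first enables the target's hotplug monitor over JSON-RPC, exactly as traced above. rpc_cmd is a thin wrapper around the stock client, so the same calls can be issued directly:

# enable automatic probe/attach of hot-inserted NVMe devices
scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -e
# list the resulting bdevs; sw_hotplug's bdev_bdfs() distills this to BDFs
scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs
# the test later tears the monitor down with the -d (disable) form
scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -d
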
00:17:26.931 [2024-11-20 15:29:12.065982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:26.931 [2024-11-20 15:29:12.066156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.931 [2024-11-20 15:29:12.066187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.931 [2024-11-20 15:29:12.066218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:26.931 [2024-11-20 15:29:12.066231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.931 [2024-11-20 15:29:12.066246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.931 [2024-11-20 15:29:12.066269] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:26.931 [2024-11-20 15:29:12.066283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.931 [2024-11-20 15:29:12.066295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.931 [2024-11-20 15:29:12.066317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:26.931 [2024-11-20 15:29:12.066328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.931 [2024-11-20 15:29:12.066343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.931 15:29:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:26.931 [2024-11-20 15:29:12.463329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
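
[editor's note] bdev_bdfs (sw_hotplug.sh@12-13), pieced together from the trace above; the jq reads rpc_cmd's output through process substitution, which is why /dev/fd/63 shows up in the xtrace:

bdev_bdfs() {
    # map every NVMe-backed bdev to its PCI address, sorted and de-duplicated
    jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
}
# the surrounding check then runs: bdfs=($(bdev_bdfs)); (( ${#bdfs[@]} > 0 ))
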
00:17:26.931 [2024-11-20 15:29:12.466007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:26.931 [2024-11-20 15:29:12.466055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.931 [2024-11-20 15:29:12.466076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.931 [2024-11-20 15:29:12.466102] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:26.931 [2024-11-20 15:29:12.466116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.931 [2024-11-20 15:29:12.466129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.931 [2024-11-20 15:29:12.466145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:26.931 [2024-11-20 15:29:12.466156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.931 [2024-11-20 15:29:12.466171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.931 [2024-11-20 15:29:12.466185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:26.931 [2024-11-20 15:29:12.466199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.931 [2024-11-20 15:29:12.466211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:26.931 15:29:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.931 15:29:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:26.931 15:29:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:26.931 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:27.191 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:27.191 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:27.191 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:27.191 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:27.191 15:29:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:27.191 15:29:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:27.191 15:29:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:39.428 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:39.429 15:29:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.429 15:29:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:39.429 15:29:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:39.429 15:29:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.429 15:29:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:39.429 [2024-11-20 15:29:25.163615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:17:39.429 [2024-11-20 15:29:25.166644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:39.429 [2024-11-20 15:29:25.166714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.429 [2024-11-20 15:29:25.166735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.429 [2024-11-20 15:29:25.166768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:39.429 [2024-11-20 15:29:25.166782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.429 [2024-11-20 15:29:25.166800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.429 [2024-11-20 15:29:25.166815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:39.429 [2024-11-20 15:29:25.166832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.429 [2024-11-20 15:29:25.166846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.429 [2024-11-20 15:29:25.166864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:39.429 [2024-11-20 15:29:25.166878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.429 [2024-11-20 15:29:25.166895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.429 15:29:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:39.429 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:39.687 [2024-11-20 15:29:25.563627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
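
[editor's note] The "Still waiting for %s to be gone" iterations above come from a short polling loop around sw_hotplug.sh@50-51. A sketch consistent with the trace; the loop shape is reconstructed, not copied:

wait_until_gone() {
    local -a bdfs
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
}
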
00:17:39.687 [2024-11-20 15:29:25.566206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:39.687 [2024-11-20 15:29:25.566255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.687 [2024-11-20 15:29:25.566288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.687 [2024-11-20 15:29:25.566314] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:39.688 [2024-11-20 15:29:25.566329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.688 [2024-11-20 15:29:25.566342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.688 [2024-11-20 15:29:25.566358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:39.688 [2024-11-20 15:29:25.566370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.688 [2024-11-20 15:29:25.566385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.688 [2024-11-20 15:29:25.566397] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:39.688 [2024-11-20 15:29:25.566411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.688 [2024-11-20 15:29:25.566423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.946 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:39.946 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:39.946 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:39.946 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:39.946 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:39.946 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:39.946 15:29:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.946 15:29:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:39.946 15:29:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.946 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:39.946 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:39.946 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:39.946 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:39.946 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:40.204 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:40.204 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:40.204 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:40.204 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:40.204 15:29:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:40.204 15:29:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:40.204 15:29:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:40.204 15:29:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:52.406 15:29:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.406 15:29:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:52.406 15:29:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:52.406 [2024-11-20 15:29:38.163897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:52.406 [2024-11-20 15:29:38.166855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.406 [2024-11-20 15:29:38.166905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.406 [2024-11-20 15:29:38.166923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.406 [2024-11-20 15:29:38.166952] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.406 [2024-11-20 15:29:38.166965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.406 [2024-11-20 15:29:38.166982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.406 [2024-11-20 15:29:38.166996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.406 [2024-11-20 15:29:38.167009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.406 [2024-11-20 15:29:38.167021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.406 [2024-11-20 15:29:38.167037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.406 [2024-11-20 15:29:38.167049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.406 [2024-11-20 15:29:38.167063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:52.406 15:29:38 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:52.406 15:29:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.406 15:29:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:52.406 15:29:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:52.406 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:52.665 [2024-11-20 15:29:38.563912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:17:52.665 [2024-11-20 15:29:38.566437] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.665 [2024-11-20 15:29:38.566483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.665 [2024-11-20 15:29:38.566506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.665 [2024-11-20 15:29:38.566530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.665 [2024-11-20 15:29:38.566547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.665 [2024-11-20 15:29:38.566560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.665 [2024-11-20 15:29:38.566592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.665 [2024-11-20 15:29:38.566604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.665 [2024-11-20 15:29:38.566627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.665 [2024-11-20 15:29:38.566641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.665 [2024-11-20 15:29:38.566657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.665 [2024-11-20 15:29:38.566670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.972 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:52.972 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:52.972 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:52.972 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:52.972 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:52.972 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:17:52.972 15:29:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.972 15:29:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:52.972 15:29:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.972 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:52.972 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:53.239 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:53.239 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:53.239 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:53.239 15:29:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:53.239 15:29:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:53.239 15:29:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:53.239 15:29:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:53.239 15:29:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:53.239 15:29:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:53.239 15:29:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:53.239 15:29:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:18:05.446 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.446 15:29:51 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:18:05.446 15:29:51 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:18:05.446 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:18:12.010 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:12.010 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:12.010 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:12.010 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:12.010 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:12.010 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:12.010 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:12.010 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:12.010 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:12.010 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:12.010 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:12.010 15:29:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.010 15:29:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:12.010 [2024-11-20 15:29:57.315950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
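
[editor's note] Putting the traced pieces together, the helper driving both passes (remove_attach_helper, sw_hotplug.sh@27-66) looks roughly like the sketch below. This is a hedged reconstruction from the script line numbers in the xtrace, reusing the hypothetical helpers sketched earlier; it is not a copy of the script:

remove_attach_helper() {
    local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3 dev
    sleep "$hotplug_wait"                   # @36: let the initial attach settle
    while ((hotplug_events--)); do          # @38
        for dev in "${nvmes[@]}"; do        # @39-40: surprise-remove every disk
            surprise_remove "$dev"
        done
        if [[ $use_bdev == true ]]; then    # @43/@50: bdev pass polls the target
            wait_until_gone
        fi
        rescan_bus                          # @56
        for dev in "${nvmes[@]}"; do        # @58-62: back onto uio_pci_generic
            rebind uio_pci_generic "$dev"
        done
        sleep $((hotplug_wait * 2))         # @66: the observed "sleep 12"
    done
}
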
00:18:12.010 [2024-11-20 15:29:57.317980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.010 [2024-11-20 15:29:57.318028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.011 [2024-11-20 15:29:57.318046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.011 [2024-11-20 15:29:57.318073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.011 [2024-11-20 15:29:57.318085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.011 [2024-11-20 15:29:57.318100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.011 [2024-11-20 15:29:57.318131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.011 [2024-11-20 15:29:57.318146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.011 [2024-11-20 15:29:57.318159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.011 [2024-11-20 15:29:57.318176] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.011 [2024-11-20 15:29:57.318188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.011 [2024-11-20 15:29:57.318210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.011 15:29:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.011 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:12.011 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:12.011 [2024-11-20 15:29:57.815995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
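The *ERROR*/*NOTICE* burst above is the expected signature of a surprise removal: the controller at 0000:00:10.0 drops into the failed state and the PCIe transport aborts its outstanding ASYNC EVENT REQUEST admin commands (ABORTED - BY REQUEST). The helper that decides when the device is really gone is traced at sw_hotplug.sh@12-13, and the /dev/fd/63 argument shows jq reading the RPC output through process substitution, so it can be reassembled with little guesswork:

    # bdev_bdfs, reassembled from sw_hotplug.sh@12-13: the PCI address of every
    # NVMe-backed bdev the target still reports, deduplicated and sorted.
    # rpc_cmd is the autotest wrapper around scripts/rpc.py.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

As used at @50 above: bdfs=($(bdev_bdfs)), then (( ${#bdfs[@]} > 0 )); a non-empty list means at least one controller is still visible through the bdev layer and the poll continues.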
00:18:12.011 [2024-11-20 15:29:57.818462] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.011 [2024-11-20 15:29:57.818517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.011 [2024-11-20 15:29:57.818538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.011 [2024-11-20 15:29:57.818563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.011 [2024-11-20 15:29:57.818592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.011 [2024-11-20 15:29:57.818605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.011 [2024-11-20 15:29:57.818622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.011 [2024-11-20 15:29:57.818634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.011 [2024-11-20 15:29:57.818651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.011 [2024-11-20 15:29:57.818664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:12.011 [2024-11-20 15:29:57.818678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.011 [2024-11-20 15:29:57.818690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.011 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:12.011 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:12.011 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:12.011 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:12.011 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:12.011 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:12.011 15:29:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.011 15:29:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:12.011 15:29:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.011 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:12.011 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:12.270 15:29:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:12.270 15:29:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:12.270 15:29:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:12.270 15:29:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:12.270 15:29:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:12.270 15:29:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:12.270 15:29:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:12.270 15:29:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:12.528 15:29:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:12.528 15:29:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:12.528 15:29:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:24.731 15:30:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.731 15:30:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:24.731 15:30:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:24.731 [2024-11-20 15:30:10.316284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:24.731 [2024-11-20 15:30:10.318987] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:24.731 [2024-11-20 15:30:10.319053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.731 [2024-11-20 15:30:10.319079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.731 [2024-11-20 15:30:10.319118] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:24.731 [2024-11-20 15:30:10.319137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.731 [2024-11-20 15:30:10.319159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.731 [2024-11-20 15:30:10.319179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:24.731 [2024-11-20 15:30:10.319199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.731 [2024-11-20 15:30:10.319218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.731 [2024-11-20 15:30:10.319241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:24.731 [2024-11-20 15:30:10.319259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.731 [2024-11-20 15:30:10.319281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:24.731 15:30:10 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:24.731 15:30:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.731 15:30:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:24.731 15:30:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:24.731 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:24.990 [2024-11-20 15:30:10.916309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:18:24.990 [2024-11-20 15:30:10.918644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:24.990 [2024-11-20 15:30:10.918700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.990 [2024-11-20 15:30:10.918725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.990 [2024-11-20 15:30:10.918753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:24.990 [2024-11-20 15:30:10.918775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.990 [2024-11-20 15:30:10.918790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.990 [2024-11-20 15:30:10.918809] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:24.990 [2024-11-20 15:30:10.918823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.990 [2024-11-20 15:30:10.918840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.990 [2024-11-20 15:30:10.918855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:24.990 [2024-11-20 15:30:10.918872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.990 [2024-11-20 15:30:10.918886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.990 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:24.990 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:24.990 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:24.990 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:24.990 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:24.990 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:18:24.990 15:30:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.990 15:30:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:25.248 15:30:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.248 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:25.248 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:25.248 15:30:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:25.248 15:30:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:25.248 15:30:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:25.507 15:30:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:25.507 15:30:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:25.507 15:30:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:25.507 15:30:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:25.507 15:30:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:25.507 15:30:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:25.507 15:30:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:25.507 15:30:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:37.705 15:30:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.705 15:30:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:37.705 15:30:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:37.705 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:37.705 15:30:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.705 15:30:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:37.705 [2024-11-20 15:30:23.516602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
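Earlier on this stretch the helper re-attaches what it removed (sw_hotplug.sh@56-66) and then, after a 12-second settle, checks at @70-71 that the sorted BDF list once again equals '0000:00:10.0 0000:00:11.0' before decrementing hotplug_events. The xtrace records only the echoed values, never where they are written, so every sysfs target below is an assumption consistent with a rescan-plus-driver_override rebind; the paired BDF writes at @60-61 feed nodes the log simply does not name:

    # Hedged sketch of the re-attach step (sw_hotplug.sh@56-66).
    # All redirection targets are assumptions; xtrace shows only the values.
    echo 1 > /sys/bus/pci/rescan            # @56 (target assumed)
    for dev in "${nvmes[@]}"; do            # @58
        # @59: select the userspace driver for the re-enumerated device
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # (target assumed)
        echo "$dev"   # @60: written to an unnamed sysfs node (xtrace drops the target)
        echo "$dev"   # @61: same value again, plausibly a bind/probe node; not recoverable
        echo ''       # @62: empty write, plausibly clearing driver_override; assumption
    done
    sleep 12                                # @66: 2 x hotplug_wait before re-checking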
00:18:37.705 [2024-11-20 15:30:23.518594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:37.706 [2024-11-20 15:30:23.518663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.706 [2024-11-20 15:30:23.518694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.706 [2024-11-20 15:30:23.518722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:37.706 [2024-11-20 15:30:23.518734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.706 [2024-11-20 15:30:23.518753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.706 [2024-11-20 15:30:23.518768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:37.706 [2024-11-20 15:30:23.518786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.706 [2024-11-20 15:30:23.518798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.706 [2024-11-20 15:30:23.518814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:37.706 [2024-11-20 15:30:23.518826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.706 [2024-11-20 15:30:23.518841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.706 15:30:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.706 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:37.706 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:37.965 [2024-11-20 15:30:23.916631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:18:37.965 [2024-11-20 15:30:23.918531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:37.965 [2024-11-20 15:30:23.918599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.965 [2024-11-20 15:30:23.918620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.965 [2024-11-20 15:30:23.918643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:37.965 [2024-11-20 15:30:23.918658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.965 [2024-11-20 15:30:23.918670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.965 [2024-11-20 15:30:23.918686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:37.965 [2024-11-20 15:30:23.918697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.965 [2024-11-20 15:30:23.918712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.965 [2024-11-20 15:30:23.918726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:37.965 [2024-11-20 15:30:23.918743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.965 [2024-11-20 15:30:23.918755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.223 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:38.223 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:38.223 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:38.224 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:38.224 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:38.224 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:38.224 15:30:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.224 15:30:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:38.224 15:30:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.224 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:38.224 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:38.482 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:38.482 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:38.482 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:38.482 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:38.482 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:38.482 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:38.482 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:38.482 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:38.482 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:38.740 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:38.740 15:30:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:50.950 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:50.950 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:50.950 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:50.950 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:50.950 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:50.950 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.950 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:50.950 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.29 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.29 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:50.950 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.29 00:18:50.950 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.29 2 00:18:50.950 remove_attach_helper took 45.29s to complete (handling 2 nvme drive(s)) 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:18:50.950 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68901 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68901 ']' 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68901 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68901 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.950 killing process with pid 68901 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68901' 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68901 00:18:50.950 15:30:36 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68901 00:18:54.232 15:30:39 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:54.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:54.798 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:54.798 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:54.798 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:54.798 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:55.056 00:18:55.056 real 2m34.138s 00:18:55.056 user 1m51.982s 00:18:55.056 sys 0m22.626s 00:18:55.056 15:30:40 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.056 ************************************ 00:18:55.056 END TEST sw_hotplug 00:18:55.056 15:30:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:55.056 ************************************ 00:18:55.056 15:30:40 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:18:55.056 15:30:40 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:55.056 15:30:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:55.056 15:30:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.056 15:30:40 -- common/autotest_common.sh@10 -- # set +x 00:18:55.056 ************************************ 00:18:55.056 START TEST nvme_xnvme 00:18:55.056 ************************************ 00:18:55.056 15:30:40 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:55.056 * Looking for test storage... 00:18:55.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:55.056 15:30:40 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:55.056 15:30:40 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:55.056 15:30:40 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.319 15:30:41 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:55.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.319 --rc genhtml_branch_coverage=1 00:18:55.319 --rc genhtml_function_coverage=1 00:18:55.319 --rc genhtml_legend=1 00:18:55.319 --rc geninfo_all_blocks=1 00:18:55.319 --rc geninfo_unexecuted_blocks=1 00:18:55.319 00:18:55.319 ' 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:55.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.319 --rc genhtml_branch_coverage=1 00:18:55.319 --rc genhtml_function_coverage=1 00:18:55.319 --rc genhtml_legend=1 00:18:55.319 --rc geninfo_all_blocks=1 00:18:55.319 --rc geninfo_unexecuted_blocks=1 00:18:55.319 00:18:55.319 ' 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:55.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.319 --rc genhtml_branch_coverage=1 00:18:55.319 --rc genhtml_function_coverage=1 00:18:55.319 --rc genhtml_legend=1 00:18:55.319 --rc geninfo_all_blocks=1 00:18:55.319 --rc geninfo_unexecuted_blocks=1 00:18:55.319 00:18:55.319 ' 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:55.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.319 --rc genhtml_branch_coverage=1 00:18:55.319 --rc genhtml_function_coverage=1 00:18:55.319 --rc genhtml_legend=1 00:18:55.319 --rc geninfo_all_blocks=1 00:18:55.319 --rc geninfo_unexecuted_blocks=1 00:18:55.319 00:18:55.319 ' 00:18:55.319 15:30:41 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:18:55.319 15:30:41 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:18:55.319 15:30:41 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:55.319 15:30:41 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:18:55.319 15:30:41 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:18:55.320 15:30:41 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:18:55.320 15:30:41 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:18:55.320 15:30:41 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:55.320 #define SPDK_CONFIG_H 00:18:55.320 #define SPDK_CONFIG_AIO_FSDEV 1 00:18:55.320 #define SPDK_CONFIG_APPS 1 00:18:55.320 #define SPDK_CONFIG_ARCH native 00:18:55.320 #define SPDK_CONFIG_ASAN 1 00:18:55.320 #undef SPDK_CONFIG_AVAHI 00:18:55.320 #undef SPDK_CONFIG_CET 00:18:55.320 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:18:55.320 #define SPDK_CONFIG_COVERAGE 1 00:18:55.320 #define SPDK_CONFIG_CROSS_PREFIX 00:18:55.320 #undef SPDK_CONFIG_CRYPTO 00:18:55.320 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:55.320 #undef SPDK_CONFIG_CUSTOMOCF 00:18:55.320 #undef SPDK_CONFIG_DAOS 00:18:55.320 #define SPDK_CONFIG_DAOS_DIR 00:18:55.320 #define SPDK_CONFIG_DEBUG 1 00:18:55.320 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:55.320 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:55.320 #define SPDK_CONFIG_DPDK_INC_DIR 00:18:55.320 #define SPDK_CONFIG_DPDK_LIB_DIR 00:18:55.320 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:55.320 #undef SPDK_CONFIG_DPDK_UADK 00:18:55.320 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:55.320 #define SPDK_CONFIG_EXAMPLES 1 00:18:55.320 #undef SPDK_CONFIG_FC 00:18:55.320 #define SPDK_CONFIG_FC_PATH 00:18:55.320 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:55.320 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:55.320 #define SPDK_CONFIG_FSDEV 1 00:18:55.320 #undef SPDK_CONFIG_FUSE 00:18:55.320 #undef SPDK_CONFIG_FUZZER 00:18:55.320 #define SPDK_CONFIG_FUZZER_LIB 00:18:55.320 #undef SPDK_CONFIG_GOLANG 00:18:55.320 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:18:55.320 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:18:55.320 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:55.320 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:18:55.320 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:55.320 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:55.320 #undef SPDK_CONFIG_HAVE_LZ4 00:18:55.320 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:18:55.320 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:18:55.320 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:18:55.320 #define SPDK_CONFIG_IDXD 1 00:18:55.320 #define SPDK_CONFIG_IDXD_KERNEL 1 00:18:55.320 #undef SPDK_CONFIG_IPSEC_MB 00:18:55.320 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:55.320 #define SPDK_CONFIG_ISAL 1 00:18:55.320 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:18:55.320 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:18:55.320 #define SPDK_CONFIG_LIBDIR 00:18:55.320 #undef SPDK_CONFIG_LTO 00:18:55.320 #define SPDK_CONFIG_MAX_LCORES 128 00:18:55.320 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:18:55.320 #define SPDK_CONFIG_NVME_CUSE 1 00:18:55.320 #undef SPDK_CONFIG_OCF 00:18:55.320 #define SPDK_CONFIG_OCF_PATH 00:18:55.320 #define SPDK_CONFIG_OPENSSL_PATH 00:18:55.320 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:55.320 #define SPDK_CONFIG_PGO_DIR 00:18:55.320 #undef SPDK_CONFIG_PGO_USE 00:18:55.320 #define SPDK_CONFIG_PREFIX /usr/local 00:18:55.320 #undef SPDK_CONFIG_RAID5F 00:18:55.320 #undef SPDK_CONFIG_RBD 00:18:55.320 #define SPDK_CONFIG_RDMA 1 00:18:55.320 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:55.320 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:55.320 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:18:55.320 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:55.320 #define SPDK_CONFIG_SHARED 1 00:18:55.320 #undef SPDK_CONFIG_SMA 00:18:55.320 #define SPDK_CONFIG_TESTS 1 00:18:55.320 #undef SPDK_CONFIG_TSAN 00:18:55.320 #define SPDK_CONFIG_UBLK 1 00:18:55.320 #define SPDK_CONFIG_UBSAN 1 00:18:55.320 #undef SPDK_CONFIG_UNIT_TESTS 00:18:55.320 #undef SPDK_CONFIG_URING 00:18:55.320 #define SPDK_CONFIG_URING_PATH 00:18:55.320 #undef SPDK_CONFIG_URING_ZNS 00:18:55.320 #undef SPDK_CONFIG_USDT 00:18:55.320 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:55.320 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:55.320 #undef SPDK_CONFIG_VFIO_USER 00:18:55.320 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:55.320 #define SPDK_CONFIG_VHOST 1 00:18:55.320 #define SPDK_CONFIG_VIRTIO 1 00:18:55.320 #undef SPDK_CONFIG_VTUNE 00:18:55.320 #define SPDK_CONFIG_VTUNE_DIR 00:18:55.320 #define SPDK_CONFIG_WERROR 1 00:18:55.320 #define SPDK_CONFIG_WPDK_DIR 00:18:55.320 #define SPDK_CONFIG_XNVME 1 00:18:55.320 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:55.320 15:30:41 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:55.320 15:30:41 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.320 15:30:41 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.320 15:30:41 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.320 15:30:41 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.320 15:30:41 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.320 15:30:41 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.320 15:30:41 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.320 15:30:41 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.320 15:30:41 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:55.321 15:30:41 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@68 -- # uname -s 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:18:55.321 
15:30:41 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:18:55.321 15:30:41 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:18:55.321 15:30:41 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:18:55.322 15:30:41 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:55.322 15:30:41 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
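The sanitizer environment assembled in the trace above boils down to a few lines of shell. A minimal sketch of the same setup (the option strings, paths, and the libfuse3 suppression are copied from the trace; the surrounding script is illustrative, not autotest_common.sh verbatim):

    # Make ASan/UBSan fail fast instead of limping on after a report.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    # LeakSanitizer reads suppressions from a plain-text file, one pattern per line;
    # the known libfuse3 leak is suppressed so it cannot fail unrelated tests.
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' > "$supp"
    export LSAN_OPTIONS=suppressions=$supp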
00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70252 ]] 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70252 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.lHZ9Ek 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.lHZ9Ek/tests/xnvme /tmp/spdk.lHZ9Ek 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:18:55.322 15:30:41 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975666688 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592121344 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:18:55.322 15:30:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975666688 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592121344 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:55.323 15:30:41 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95126380544 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4576399360 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:18:55.323 * Looking for test storage... 
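The mount table being filled in above comes straight from df. A rough sketch of how set_test_storage builds those associative arrays (the field order matches the `read -r source fs size use avail _ mount` seen in the trace; the 1 KiB-to-byte scaling is an assumption inferred from the byte-sized values recorded):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      # df -T reports 1 KiB blocks; the arrays in the trace hold byte counts
      sizes["$mount"]=$((size * 1024))
      uses["$mount"]=$((use * 1024))
      avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)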
00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975666688 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:55.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:55.323 15:30:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:55.583 15:30:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:55.583 15:30:41 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.583 15:30:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:55.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.583 --rc genhtml_branch_coverage=1 00:18:55.583 --rc genhtml_function_coverage=1 00:18:55.583 --rc genhtml_legend=1 00:18:55.583 --rc geninfo_all_blocks=1 00:18:55.583 --rc geninfo_unexecuted_blocks=1 00:18:55.583 00:18:55.583 ' 00:18:55.583 15:30:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:55.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.583 --rc genhtml_branch_coverage=1 00:18:55.583 --rc genhtml_function_coverage=1 00:18:55.583 --rc genhtml_legend=1 00:18:55.583 --rc geninfo_all_blocks=1 
00:18:55.583 --rc geninfo_unexecuted_blocks=1 00:18:55.583 00:18:55.583 ' 00:18:55.583 15:30:41 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:55.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.583 --rc genhtml_branch_coverage=1 00:18:55.583 --rc genhtml_function_coverage=1 00:18:55.583 --rc genhtml_legend=1 00:18:55.583 --rc geninfo_all_blocks=1 00:18:55.583 --rc geninfo_unexecuted_blocks=1 00:18:55.583 00:18:55.583 ' 00:18:55.583 15:30:41 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:55.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.583 --rc genhtml_branch_coverage=1 00:18:55.583 --rc genhtml_function_coverage=1 00:18:55.583 --rc genhtml_legend=1 00:18:55.583 --rc geninfo_all_blocks=1 00:18:55.583 --rc geninfo_unexecuted_blocks=1 00:18:55.583 00:18:55.583 ' 00:18:55.583 15:30:41 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.583 15:30:41 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.583 15:30:41 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.583 15:30:41 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.583 15:30:41 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.583 15:30:41 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:55.583 15:30:41 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.583 15:30:41 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite')
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite')
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes')
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite')
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite')
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite')
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1')
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true')
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false')
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme
00:18:55.583 15:30:41 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:18:55.842 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:18:56.101 Waiting for block devices as requested
00:18:56.101 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:18:56.359 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:18:56.359 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:18:56.618 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:19:01.893 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:19:01.893 15:30:47 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme
00:19:01.893 15:30:47 nvme_xnvme -- xnvme/common.sh@74 -- # nproc
00:19:01.893 15:30:47 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10
00:19:02.151 15:30:48 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme
00:19:02.152 15:30:48 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*)
00:19:02.152 15:30:48 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1
00:19:02.152 15:30:48 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:19:02.152 15:30:48 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:19:02.411 No valid GPT data, bailing
00:19:02.411 15:30:48 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:19:02.411 15:30:48 nvme_xnvme --
scripts/common.sh@394 -- # pt= 00:19:02.411 15:30:48 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:02.411 15:30:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:02.411 15:30:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:02.411 15:30:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.411 15:30:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:02.411 ************************************ 00:19:02.411 START TEST xnvme_rpc 00:19:02.411 ************************************ 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70649 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70649 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70649 ']' 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.411 15:30:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.411 [2024-11-20 15:30:48.311648] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
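waitforlisten above blocks until the freshly started spdk_tgt (pid 70649 in this run) answers on the RPC socket before any bdev_xnvme_create call is issued. Loosely sketched, the wait amounts to the following (the rpc_get_methods probe is an assumption about the readiness check, not a quote from autotest_common.sh):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    tgt_pid=$!
    rpc_addr=/var/tmp/spdk.sock   # DEFAULT_RPC_ADDR from the trace
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
      # the socket only answers once the reactor is up; any cheap RPC works as a probe
      scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
      kill -0 "$tgt_pid" || exit 1   # bail out early if the target already died
      sleep 0.1
    done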
00:19:02.411 [2024-11-20 15:30:48.311828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70649 ] 00:19:02.670 [2024-11-20 15:30:48.517498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.929 [2024-11-20 15:30:48.685779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.863 xnvme_bdev 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.863 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:04.120 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70649 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70649 ']' 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70649 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70649 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.121 killing process with pid 70649 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70649' 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70649 00:19:04.121 15:30:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70649 00:19:07.421 00:19:07.421 real 0m4.732s 00:19:07.421 user 0m4.901s 00:19:07.421 sys 0m0.616s 00:19:07.421 15:30:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.421 15:30:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.421 ************************************ 00:19:07.421 END TEST xnvme_rpc 00:19:07.421 ************************************ 00:19:07.421 15:30:52 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:07.421 15:30:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:07.421 15:30:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.421 15:30:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.421 ************************************ 00:19:07.421 START TEST xnvme_bdevperf 00:19:07.421 ************************************ 00:19:07.421 15:30:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:07.421 15:30:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:07.421 15:30:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:19:07.421 15:30:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:07.421 15:30:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:07.421 15:30:52 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf
00:19:07.421 15:30:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:19:07.421 15:30:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:07.421 {
00:19:07.421   "subsystems": [
00:19:07.421     {
00:19:07.421       "subsystem": "bdev",
00:19:07.421       "config": [
00:19:07.421         {
00:19:07.421           "params": {
00:19:07.421             "io_mechanism": "libaio",
00:19:07.421             "conserve_cpu": false,
00:19:07.421             "filename": "/dev/nvme0n1",
00:19:07.421             "name": "xnvme_bdev"
00:19:07.421           },
00:19:07.421           "method": "bdev_xnvme_create"
00:19:07.421         },
00:19:07.421         {
00:19:07.421           "method": "bdev_wait_for_examine"
00:19:07.421         }
00:19:07.421       ]
00:19:07.421     }
00:19:07.421   ]
00:19:07.421 }
00:19:07.421 [2024-11-20 15:30:53.070253] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
00:19:07.421 [2024-11-20 15:30:53.070440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70740 ]
00:19:07.421 [2024-11-20 15:30:53.269553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:07.706 [2024-11-20 15:30:53.472197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:08.273 Running I/O for 5 seconds...
00:19:10.143 24404.00 IOPS, 95.33 MiB/s
[2024-11-20T15:30:57.036Z] 25721.50 IOPS, 100.47 MiB/s
[2024-11-20T15:30:58.410Z] 27901.00 IOPS, 108.99 MiB/s
[2024-11-20T15:30:59.357Z] 28868.25 IOPS, 112.77 MiB/s
00:19:13.399 Latency(us)
00:19:13.399 [2024-11-20T15:30:59.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:13.399 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:19:13.399 xnvme_bdev : 5.00 29243.59 114.23 0.00 0.00 2183.42 255.51 39446.43
00:19:13.399 [2024-11-20T15:30:59.357Z] ===================================================================================================================
00:19:13.399 [2024-11-20T15:30:59.357Z] Total : 29243.59 114.23 0.00 0.00 2183.42 255.51 39446.43
00:19:14.775 15:31:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:19:14.775 15:31:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:19:14.775 15:31:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:19:14.775 15:31:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:19:14.775 15:31:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:14.775 {
00:19:14.775   "subsystems": [
00:19:14.775     {
00:19:14.775       "subsystem": "bdev",
00:19:14.775       "config": [
00:19:14.775         {
00:19:14.775           "params": {
00:19:14.775             "io_mechanism": "libaio",
00:19:14.775             "conserve_cpu": false,
00:19:14.775             "filename": "/dev/nvme0n1",
00:19:14.775             "name": "xnvme_bdev"
00:19:14.775           },
00:19:14.775           "method": "bdev_xnvme_create"
00:19:14.775         },
00:19:14.775         {
00:19:14.775           "method": "bdev_wait_for_examine"
00:19:14.775         }
00:19:14.775       ]
00:19:14.775     }
00:19:14.775   ]
00:19:14.775 }
00:19:14.775 [2024-11-20 15:31:00.520710] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
00:19:14.775 [2024-11-20 15:31:00.520891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70826 ]
00:19:14.775 [2024-11-20 15:31:00.725755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:15.034 [2024-11-20 15:31:00.892356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:15.601 Running I/O for 5 seconds...
00:19:17.515 31827.00 IOPS, 124.32 MiB/s
[2024-11-20T15:31:04.411Z] 32937.00 IOPS, 128.66 MiB/s
[2024-11-20T15:31:05.349Z] 33073.67 IOPS, 129.19 MiB/s
[2024-11-20T15:31:06.728Z] 31871.25 IOPS, 124.50 MiB/s
00:19:20.770 Latency(us)
00:19:20.770 [2024-11-20T15:31:06.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:20.770 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:19:20.770 xnvme_bdev : 5.00 31227.68 121.98 0.00 0.00 2044.37 193.10 6709.64
00:19:20.770 [2024-11-20T15:31:06.728Z] ===================================================================================================================
00:19:20.770 [2024-11-20T15:31:06.728Z] Total : 31227.68 121.98 0.00 0.00 2044.37 193.10 6709.64
00:19:21.714
00:19:21.714 real 0m14.708s
00:19:21.714 user 0m5.911s
00:19:21.714 sys 0m5.990s
00:19:21.714 15:31:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:21.714 ************************************
00:19:21.714 END TEST xnvme_bdevperf
00:19:21.714 ************************************
00:19:21.714 15:31:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:21.973 15:31:07 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:19:21.973 15:31:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:21.973 15:31:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:21.973 15:31:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:21.973 ************************************
00:19:21.973 START TEST xnvme_fio_plugin
00:19:21.973 ************************************
00:19:21.973 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:19:21.973 15:31:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:19:21.973 15:31:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:19:21.973 15:31:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:19:21.973 15:31:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:21.974 15:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:21.974 {
00:19:21.974   "subsystems": [
00:19:21.974     {
00:19:21.974       "subsystem": "bdev",
00:19:21.974       "config": [
00:19:21.974         {
00:19:21.974           "params": {
00:19:21.974             "io_mechanism": "libaio",
00:19:21.974             "conserve_cpu": false,
00:19:21.974             "filename": "/dev/nvme0n1",
00:19:21.974             "name": "xnvme_bdev"
00:19:21.974           },
00:19:21.974           "method": "bdev_xnvme_create"
00:19:21.974         },
00:19:21.974         {
00:19:21.974           "method": "bdev_wait_for_examine"
00:19:21.974         }
00:19:21.974       ]
00:19:21.974     }
00:19:21.974   ]
00:19:21.974 }
00:19:22.234 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:19:22.234 fio-3.35
00:19:22.234 Starting 1 thread
00:19:28.827
00:19:28.827 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70951: Wed Nov 20 15:31:13 2024
00:19:28.827   read: IOPS=26.8k, BW=105MiB/s (110MB/s)(523MiB/5001msec)
00:19:28.827     slat (usec): min=5, max=615, avg=33.36, stdev=26.23
00:19:28.827     clat (usec): min=141, max=5566, avg=1343.73, stdev=683.45
00:19:28.827      lat (usec): min=198, max=5658, avg=1377.09, stdev=684.06
00:19:28.827     clat percentiles (usec):
00:19:28.827      |  1.00th=[  253],  5.00th=[  371], 10.00th=[  482], 20.00th=[  693],
00:19:28.827      | 30.00th=[  898], 40.00th=[ 1090], 50.00th=[ 1303], 60.00th=[ 1500],
00:19:28.827      | 70.00th=[ 1713], 80.00th=[ 1942], 90.00th=[ 2212], 95.00th=[ 2442],
00:19:28.827      | 99.00th=[ 3228], 99.50th=[ 3785], 99.90th=[ 4555], 99.95th=[ 4686],
00:19:28.827      | 99.99th=[ 5080]
00:19:28.827    bw (  KiB/s): min=97872, max=115696, per=99.07%, avg=106110.22, stdev=6051.15, samples=9
00:19:28.827    iops        : min=24468, max=28924, avg=26527.56, stdev=1512.79, samples=9
00:19:28.827   lat (usec)   : 250=0.93%, 500=10.04%, 750=11.96%, 1000=12.33%
00:19:28.827   lat (msec)   : 2=47.12%, 4=17.25%, 10=0.37%
00:19:28.827   cpu          : usr=21.58%, sys=54.32%, ctx=76, majf=0, minf=764
00:19:28.827   IO depths    : 1=0.1%, 2=1.0%, 4=5.2%, 8=12.7%, 16=26.2%, 32=53.2%, >=64=1.7%
00:19:28.827      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:28.827      complete  : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0%
00:19:28.827      issued rwts: total=133916,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:28.827      latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:28.827
00:19:28.827 Run status group 0 (all jobs):
00:19:28.827    READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=523MiB (549MB), run=5001-5001msec
00:19:29.764 -----------------------------------------------------
00:19:29.764 Suppressions used:
00:19:29.764   count      bytes template
00:19:29.764       1         11 /usr/src/fio/parse.c
00:19:29.764       1          8 libtcmalloc_minimal.so
00:19:29.764       1        904 libcrypto.so
00:19:29.764 -----------------------------------------------------
00:19:29.764
00:19:29.764 15:31:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:19:29.764 15:31:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:19:29.764 15:31:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:29.764 15:31:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:19:29.764 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:29.765 15:31:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:29.765 {
00:19:29.765   "subsystems": [
00:19:29.765     {
00:19:29.765       "subsystem": "bdev",
00:19:29.765       "config": [
00:19:29.765         {
00:19:29.765           "params": {
00:19:29.765             "io_mechanism": "libaio",
00:19:29.765             "conserve_cpu": false,
00:19:29.765             "filename": "/dev/nvme0n1",
00:19:29.765             "name": "xnvme_bdev"
00:19:29.765           },
00:19:29.765           "method": "bdev_xnvme_create"
00:19:29.765         },
00:19:29.765         {
00:19:29.765           "method": "bdev_wait_for_examine"
00:19:29.765         }
00:19:29.765       ]
00:19:29.765     }
00:19:29.765   ]
00:19:29.765 }
00:19:30.024 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:19:30.024 fio-3.35
00:19:30.024 Starting 1 thread
00:19:36.591
00:19:36.591 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71054: Wed Nov 20 15:31:21 2024
00:19:36.592   write: IOPS=28.6k, BW=112MiB/s (117MB/s)(558MiB/5001msec); 0 zone resets
00:19:36.592     slat (usec): min=4, max=2241, avg=31.33, stdev=28.27
00:19:36.592     clat (usec): min=103, max=5462, avg=1255.37, stdev=703.67
00:19:36.592      lat (usec): min=167, max=5497, avg=1286.70, stdev=706.91
00:19:36.592     clat percentiles (usec):
00:19:36.592      |  1.00th=[  231],  5.00th=[  330], 10.00th=[  429], 20.00th=[  611],
00:19:36.592      | 30.00th=[  783], 40.00th=[  963], 50.00th=[ 1139], 60.00th=[ 1352],
00:19:36.592      | 70.00th=[ 1582], 80.00th=[ 1876], 90.00th=[ 2212], 95.00th=[ 2442],
00:19:36.592      | 99.00th=[ 3359], 99.50th=[ 3851], 99.90th=[ 4490], 99.95th=[ 4686],
00:19:36.592      | 99.99th=[ 5080]
00:19:36.592    bw (  KiB/s): min=96616, max=138432, per=100.00%, avg=115789.33, stdev=15125.64, samples=9
00:19:36.592    iops        : min=24154, max=34608, avg=28947.33, stdev=3781.41, samples=9
00:19:36.592   lat (usec)   : 250=1.56%, 500=12.35%, 750=14.05%, 1000=14.35%
00:19:36.592   lat (msec)   : 2=41.56%, 4=15.75%, 10=0.37%
00:19:36.592   cpu          : usr=21.52%, sys=55.68%, ctx=91, majf=0, minf=765
00:19:36.592   IO depths    : 1=0.1%, 2=1.4%, 4=5.3%, 8=12.3%, 16=25.9%, 32=53.2%, >=64=1.7%
00:19:36.592      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:36.592      complete  : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0%
00:19:36.592      issued rwts: total=0,142843,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:36.592      latency   : target=0, window=0, percentile=100.00%, depth=64
00:19:36.592
00:19:36.592 Run status group 0 (all jobs):
00:19:36.592   WRITE: bw=112MiB/s (117MB/s), 112MiB/s-112MiB/s (117MB/s-117MB/s), io=558MiB (585MB), run=5001-5001msec
00:19:37.160 -----------------------------------------------------
00:19:37.160 Suppressions used:
00:19:37.160   count      bytes template
00:19:37.160       1         11 /usr/src/fio/parse.c
00:19:37.160       1          8 libtcmalloc_minimal.so
00:19:37.160       1        904 libcrypto.so
00:19:37.160 -----------------------------------------------------
00:19:37.160
00:19:37.160
00:19:37.160 real 0m15.179s
00:19:37.160 user 0m6.170s
00:19:37.160 sys 0m6.315s
00:19:37.160 15:31:22 nvme_xnvme.xnvme_fio_plugin --
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.160 ************************************ 00:19:37.160 END TEST xnvme_fio_plugin 00:19:37.160 ************************************ 00:19:37.160 15:31:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:37.160 15:31:22 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:37.160 15:31:22 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:37.160 15:31:22 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:37.160 15:31:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:37.160 15:31:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:37.160 15:31:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.160 15:31:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:37.160 ************************************ 00:19:37.160 START TEST xnvme_rpc 00:19:37.160 ************************************ 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71139 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71139 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71139 ']' 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.160 15:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:37.160 [2024-11-20 15:31:23.105477] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
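The xnvme_rpc test starting here drives spdk_tgt entirely over its RPC socket; rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py. A minimal manual equivalent of the sequence traced below (a sketch only — it assumes the target is already listening on the default /var/tmp/spdk.sock and that /dev/nvme0n1 is not claimed by another driver):

  # create the xnvme bdev with the libaio mechanism; -c turns conserve_cpu on
  ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
  # dump the bdev subsystem config; the test filters this with jq to verify each param
  ./scripts/rpc.py framework_get_config bdev
  # tear the bdev back down
  ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev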
00:19:37.160 [2024-11-20 15:31:23.105992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71139 ] 00:19:37.420 [2024-11-20 15:31:23.300060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.678 [2024-11-20 15:31:23.419882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:38.618 xnvme_bdev 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71139 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71139 ']' 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71139 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71139 00:19:38.618 killing process with pid 71139 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71139' 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71139 00:19:38.618 15:31:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71139 00:19:41.159 ************************************ 00:19:41.159 END TEST xnvme_rpc 00:19:41.159 ************************************ 00:19:41.159 00:19:41.159 real 0m4.036s 00:19:41.159 user 0m4.064s 00:19:41.159 sys 0m0.570s 00:19:41.159 15:31:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.159 15:31:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.159 15:31:27 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:41.159 15:31:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:41.159 15:31:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.159 15:31:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:41.159 ************************************ 00:19:41.159 START TEST xnvme_bdevperf 00:19:41.159 ************************************ 00:19:41.159 15:31:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:41.159 15:31:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:41.159 15:31:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:19:41.159 15:31:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:41.159 15:31:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:41.159 15:31:27 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:41.159 15:31:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:41.159 15:31:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:41.159 { 00:19:41.159 "subsystems": [ 00:19:41.159 { 00:19:41.159 "subsystem": "bdev", 00:19:41.159 "config": [ 00:19:41.159 { 00:19:41.159 "params": { 00:19:41.159 "io_mechanism": "libaio", 00:19:41.159 "conserve_cpu": true, 00:19:41.159 "filename": "/dev/nvme0n1", 00:19:41.159 "name": "xnvme_bdev" 00:19:41.159 }, 00:19:41.159 "method": "bdev_xnvme_create" 00:19:41.159 }, 00:19:41.159 { 00:19:41.159 "method": "bdev_wait_for_examine" 00:19:41.159 } 00:19:41.159 ] 00:19:41.159 } 00:19:41.159 ] 00:19:41.159 } 00:19:41.419 [2024-11-20 15:31:27.172877] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:19:41.419 [2024-11-20 15:31:27.173062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71220 ] 00:19:41.419 [2024-11-20 15:31:27.367549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.678 [2024-11-20 15:31:27.490500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.937 Running I/O for 5 seconds... 00:19:44.252 30534.00 IOPS, 119.27 MiB/s [2024-11-20T15:31:31.145Z] 31686.00 IOPS, 123.77 MiB/s [2024-11-20T15:31:32.161Z] 31755.33 IOPS, 124.04 MiB/s [2024-11-20T15:31:33.097Z] 32313.75 IOPS, 126.23 MiB/s [2024-11-20T15:31:33.097Z] 32159.80 IOPS, 125.62 MiB/s 00:19:47.139 Latency(us) 00:19:47.139 [2024-11-20T15:31:33.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.139 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:47.139 xnvme_bdev : 5.01 32140.40 125.55 0.00 0.00 1986.46 473.97 9986.44 00:19:47.139 [2024-11-20T15:31:33.097Z] =================================================================================================================== 00:19:47.139 [2024-11-20T15:31:33.097Z] Total : 32140.40 125.55 0.00 0.00 1986.46 473.97 9986.44 00:19:48.516 15:31:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:48.516 15:31:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:48.516 15:31:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:48.516 15:31:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:48.516 15:31:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:48.516 { 00:19:48.516 "subsystems": [ 00:19:48.516 { 00:19:48.516 "subsystem": "bdev", 00:19:48.516 "config": [ 00:19:48.516 { 00:19:48.516 "params": { 00:19:48.516 "io_mechanism": "libaio", 00:19:48.516 "conserve_cpu": true, 00:19:48.516 "filename": "/dev/nvme0n1", 00:19:48.516 "name": "xnvme_bdev" 00:19:48.516 }, 00:19:48.516 "method": "bdev_xnvme_create" 00:19:48.516 }, 00:19:48.516 { 00:19:48.516 "method": "bdev_wait_for_examine" 00:19:48.516 } 00:19:48.516 ] 00:19:48.516 } 00:19:48.516 ] 00:19:48.516 } 00:19:48.516 [2024-11-20 15:31:34.181925] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
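Each bdevperf pass here is handed its bdev table as JSON on an anonymous file descriptor (--json /dev/fd/62). A standalone reproduction of the randwrite run that follows could use process substitution to the same effect (a sketch; the binary and device paths simply mirror the workspace layout seen in this trace):

  ./build/examples/bdevperf -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 --json <(cat <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_xnvme_create",
     "params": {"io_mechanism": "libaio", "conserve_cpu": true,
                "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
    {"method": "bdev_wait_for_examine"}
  ]}]}
  EOF
  )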
00:19:48.516 [2024-11-20 15:31:34.182430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71301 ] 00:19:48.516 [2024-11-20 15:31:34.369890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.776 [2024-11-20 15:31:34.494150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.035 Running I/O for 5 seconds... 00:19:51.348 36712.00 IOPS, 143.41 MiB/s [2024-11-20T15:31:38.240Z] 33061.50 IOPS, 129.15 MiB/s [2024-11-20T15:31:39.177Z] 32107.33 IOPS, 125.42 MiB/s [2024-11-20T15:31:40.112Z] 32002.75 IOPS, 125.01 MiB/s 00:19:54.154 Latency(us) 00:19:54.154 [2024-11-20T15:31:40.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.154 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:54.154 xnvme_bdev : 5.00 31982.05 124.93 0.00 0.00 1996.11 96.06 7333.79 00:19:54.154 [2024-11-20T15:31:40.112Z] =================================================================================================================== 00:19:54.154 [2024-11-20T15:31:40.112Z] Total : 31982.05 124.93 0.00 0.00 1996.11 96.06 7333.79 00:19:55.530 00:19:55.530 real 0m14.180s 00:19:55.530 user 0m5.503s 00:19:55.530 sys 0m5.932s 00:19:55.530 ************************************ 00:19:55.530 END TEST xnvme_bdevperf 00:19:55.530 ************************************ 00:19:55.530 15:31:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.530 15:31:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:55.530 15:31:41 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:55.530 15:31:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:55.530 15:31:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.530 15:31:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.530 ************************************ 00:19:55.530 START TEST xnvme_fio_plugin 00:19:55.530 ************************************ 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:55.530 15:31:41 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:55.530 15:31:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.530 { 00:19:55.530 "subsystems": [ 00:19:55.530 { 00:19:55.530 "subsystem": "bdev", 00:19:55.530 "config": [ 00:19:55.530 { 00:19:55.530 "params": { 00:19:55.530 "io_mechanism": "libaio", 00:19:55.530 "conserve_cpu": true, 00:19:55.530 "filename": "/dev/nvme0n1", 00:19:55.530 "name": "xnvme_bdev" 00:19:55.530 }, 00:19:55.530 "method": "bdev_xnvme_create" 00:19:55.530 }, 00:19:55.530 { 00:19:55.530 "method": "bdev_wait_for_examine" 00:19:55.530 } 00:19:55.530 ] 00:19:55.530 } 00:19:55.530 ] 00:19:55.530 } 00:19:55.789 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:55.789 fio-3.35 00:19:55.789 Starting 1 thread 00:20:02.382 00:20:02.382 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71426: Wed Nov 20 15:31:47 2024 00:20:02.382 read: IOPS=27.0k, BW=105MiB/s (111MB/s)(527MiB/5001msec) 00:20:02.382 slat (usec): min=5, max=1722, avg=33.26, stdev=26.65 00:20:02.382 clat (usec): min=102, max=7222, avg=1314.80, stdev=730.93 00:20:02.382 lat (usec): min=153, max=7274, avg=1348.06, stdev=733.52 00:20:02.382 clat percentiles (usec): 00:20:02.382 | 1.00th=[ 227], 5.00th=[ 330], 10.00th=[ 437], 20.00th=[ 635], 00:20:02.382 | 30.00th=[ 832], 40.00th=[ 1029], 50.00th=[ 1221], 60.00th=[ 1434], 00:20:02.382 | 70.00th=[ 1663], 80.00th=[ 1942], 90.00th=[ 2245], 95.00th=[ 2540], 00:20:02.382 | 99.00th=[ 3490], 99.50th=[ 3982], 99.90th=[ 4752], 99.95th=[ 5014], 00:20:02.382 | 99.99th=[ 5997] 00:20:02.382 bw ( KiB/s): min=92112, max=132768, per=99.21%, avg=107098.67, stdev=13224.22, 
samples=9 00:20:02.382 iops : min=23028, max=33192, avg=26774.67, stdev=3306.06, samples=9 00:20:02.382 lat (usec) : 250=1.73%, 500=11.38%, 750=12.66%, 1000=12.88% 00:20:02.383 lat (msec) : 2=43.70%, 4=17.16%, 10=0.49% 00:20:02.383 cpu : usr=21.00%, sys=54.54%, ctx=141, majf=0, minf=764 00:20:02.383 IO depths : 1=0.1%, 2=1.6%, 4=5.5%, 8=12.4%, 16=25.8%, 32=52.9%, >=64=1.7% 00:20:02.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.383 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:02.383 issued rwts: total=134959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:02.383 00:20:02.383 Run status group 0 (all jobs): 00:20:02.383 READ: bw=105MiB/s (111MB/s), 105MiB/s-105MiB/s (111MB/s-111MB/s), io=527MiB (553MB), run=5001-5001msec 00:20:03.320 ----------------------------------------------------- 00:20:03.320 Suppressions used: 00:20:03.320 count bytes template 00:20:03.320 1 11 /usr/src/fio/parse.c 00:20:03.320 1 8 libtcmalloc_minimal.so 00:20:03.320 1 904 libcrypto.so 00:20:03.320 ----------------------------------------------------- 00:20:03.320 00:20:03.320 15:31:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:03.320 15:31:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:03.320 15:31:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:03.320 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:03.320 15:31:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:03.320 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:03.321 15:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:03.321 { 00:20:03.321 "subsystems": [ 00:20:03.321 { 00:20:03.321 "subsystem": "bdev", 00:20:03.321 "config": [ 00:20:03.321 { 00:20:03.321 "params": { 00:20:03.321 "io_mechanism": "libaio", 00:20:03.321 "conserve_cpu": true, 00:20:03.321 "filename": "/dev/nvme0n1", 00:20:03.321 "name": "xnvme_bdev" 00:20:03.321 }, 00:20:03.321 "method": "bdev_xnvme_create" 00:20:03.321 }, 00:20:03.321 { 00:20:03.321 "method": "bdev_wait_for_examine" 00:20:03.321 } 00:20:03.321 ] 00:20:03.321 } 00:20:03.321 ] 00:20:03.321 } 00:20:03.321 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:03.321 fio-3.35 00:20:03.321 Starting 1 thread 00:20:09.887 00:20:09.887 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71523: Wed Nov 20 15:31:55 2024 00:20:09.887 write: IOPS=33.2k, BW=130MiB/s (136MB/s)(649MiB/5001msec); 0 zone resets 00:20:09.887 slat (usec): min=4, max=814, avg=26.64, stdev=27.44 00:20:09.887 clat (usec): min=103, max=5990, avg=1107.00, stdev=637.05 00:20:09.887 lat (usec): min=163, max=6045, avg=1133.63, stdev=640.89 00:20:09.887 clat percentiles (usec): 00:20:09.887 | 1.00th=[ 219], 5.00th=[ 322], 10.00th=[ 412], 20.00th=[ 570], 00:20:09.887 | 30.00th=[ 701], 40.00th=[ 832], 50.00th=[ 963], 60.00th=[ 1123], 00:20:09.887 | 70.00th=[ 1336], 80.00th=[ 1598], 90.00th=[ 2008], 95.00th=[ 2311], 00:20:09.887 | 99.00th=[ 2933], 99.50th=[ 3392], 99.90th=[ 4490], 99.95th=[ 4817], 00:20:09.887 | 99.99th=[ 5342] 00:20:09.887 bw ( KiB/s): min=97392, max=210696, per=100.00%, avg=134000.00, stdev=35167.20, samples=9 00:20:09.887 iops : min=24348, max=52674, avg=33499.89, stdev=8791.76, samples=9 00:20:09.887 lat (usec) : 250=2.01%, 500=13.35%, 750=18.48%, 1000=18.44% 00:20:09.887 lat (msec) : 2=37.46%, 4=10.05%, 10=0.20% 00:20:09.887 cpu : usr=24.14%, sys=54.04%, ctx=58, majf=0, minf=765 00:20:09.887 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=11.4%, 16=25.5%, 32=55.4%, >=64=1.8% 00:20:09.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.887 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:09.887 issued rwts: total=0,166163,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:09.887 00:20:09.887 Run status group 0 (all jobs): 00:20:09.887 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=649MiB (681MB), run=5001-5001msec 00:20:10.825 ----------------------------------------------------- 00:20:10.825 Suppressions used: 00:20:10.825 count bytes template 00:20:10.825 1 11 /usr/src/fio/parse.c 00:20:10.825 1 8 libtcmalloc_minimal.so 00:20:10.825 1 904 libcrypto.so 00:20:10.825 ----------------------------------------------------- 00:20:10.825 00:20:10.825 ************************************ 00:20:10.825 END TEST xnvme_fio_plugin 00:20:10.825 ************************************ 00:20:10.825 
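The ldd | grep libasan | awk '{print $3}' dance traced before every fio run locates the sanitizer runtime the plugin links against, so it can be preloaded ahead of the plugin itself (the ASAN DSO has to come first in LD_PRELOAD). Condensed, the helper's effect is roughly the following (a sketch — the real loop also probes libclang_rt.asan, and ./bdev.json here stands in for the JSON the tests generate on /dev/fd/62):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # third ldd column is the resolved library path, e.g. /usr/lib64/libasan.so.8
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=./bdev.json --filename=xnvme_bdev \
      --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
      --time_based --runtime=5 --thread=1 --name xnvme_bdev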
00:20:10.825 real 0m15.271s 00:20:10.825 user 0m6.371s 00:20:10.825 sys 0m6.228s 00:20:10.825 15:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:10.825 15:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:10.825 15:31:56 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:20:10.825 15:31:56 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:20:10.825 15:31:56 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:20:10.825 15:31:56 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:20:10.825 15:31:56 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:20:10.825 15:31:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:10.825 15:31:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:20:10.825 15:31:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:20:10.825 15:31:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:10.825 15:31:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:10.825 15:31:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:10.825 15:31:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:10.825 ************************************ 00:20:10.825 START TEST xnvme_rpc 00:20:10.825 ************************************ 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:10.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71615 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71615 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71615 ']' 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.825 15:31:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:10.825 [2024-11-20 15:31:56.768315] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
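Every field check in these xnvme_rpc passes follows the same pattern: pull the live config with framework_get_config bdev, then filter it with jq. The whole rpc_xnvme helper reduces to one pipeline (a sketch; swap .io_mechanism for .filename, .name, or .conserve_cpu depending on the field under test):

  ./scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
  # expected output for the pass starting here: io_uring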
00:20:10.826 [2024-11-20 15:31:56.768785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71615 ] 00:20:11.084 [2024-11-20 15:31:56.962812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.344 [2024-11-20 15:31:57.071718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.282 xnvme_bdev 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.282 15:31:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71615 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71615 ']' 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71615 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71615 00:20:12.282 killing process with pid 71615 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71615' 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71615 00:20:12.282 15:31:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71615 00:20:14.860 ************************************ 00:20:14.860 END TEST xnvme_rpc 00:20:14.860 ************************************ 00:20:14.860 00:20:14.860 real 0m3.959s 00:20:14.860 user 0m4.022s 00:20:14.860 sys 0m0.547s 00:20:14.860 15:32:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.860 15:32:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:14.860 15:32:00 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:14.860 15:32:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:14.860 15:32:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.860 15:32:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:14.860 ************************************ 00:20:14.860 START TEST xnvme_bdevperf 00:20:14.860 ************************************ 00:20:14.860 15:32:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:14.860 15:32:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:14.860 15:32:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:20:14.860 15:32:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:14.860 15:32:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:14.860 15:32:00 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:14.860 15:32:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:14.860 15:32:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:14.860 { 00:20:14.860 "subsystems": [ 00:20:14.860 { 00:20:14.860 "subsystem": "bdev", 00:20:14.860 "config": [ 00:20:14.860 { 00:20:14.860 "params": { 00:20:14.860 "io_mechanism": "io_uring", 00:20:14.860 "conserve_cpu": false, 00:20:14.860 "filename": "/dev/nvme0n1", 00:20:14.860 "name": "xnvme_bdev" 00:20:14.860 }, 00:20:14.860 "method": "bdev_xnvme_create" 00:20:14.860 }, 00:20:14.860 { 00:20:14.860 "method": "bdev_wait_for_examine" 00:20:14.860 } 00:20:14.860 ] 00:20:14.860 } 00:20:14.860 ] 00:20:14.860 } 00:20:14.860 [2024-11-20 15:32:00.723825] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:20:14.860 [2024-11-20 15:32:00.724445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71696 ] 00:20:15.119 [2024-11-20 15:32:00.897802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.119 [2024-11-20 15:32:01.013745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.687 Running I/O for 5 seconds... 00:20:17.560 45438.00 IOPS, 177.49 MiB/s [2024-11-20T15:32:04.455Z] 47030.50 IOPS, 183.71 MiB/s [2024-11-20T15:32:05.393Z] 48798.67 IOPS, 190.62 MiB/s [2024-11-20T15:32:06.795Z] 50051.50 IOPS, 195.51 MiB/s 00:20:20.837 Latency(us) 00:20:20.837 [2024-11-20T15:32:06.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.837 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:20.837 xnvme_bdev : 5.00 51135.52 199.75 0.00 0.00 1247.81 360.84 6865.68 00:20:20.837 [2024-11-20T15:32:06.795Z] =================================================================================================================== 00:20:20.837 [2024-11-20T15:32:06.795Z] Total : 51135.52 199.75 0.00 0.00 1247.81 360.84 6865.68 00:20:21.770 15:32:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:21.770 15:32:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:21.770 15:32:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:21.770 15:32:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:21.770 15:32:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:21.770 { 00:20:21.770 "subsystems": [ 00:20:21.770 { 00:20:21.770 "subsystem": "bdev", 00:20:21.770 "config": [ 00:20:21.770 { 00:20:21.770 "params": { 00:20:21.770 "io_mechanism": "io_uring", 00:20:21.770 "conserve_cpu": false, 00:20:21.770 "filename": "/dev/nvme0n1", 00:20:21.770 "name": "xnvme_bdev" 00:20:21.770 }, 00:20:21.770 "method": "bdev_xnvme_create" 00:20:21.770 }, 00:20:21.770 { 00:20:21.770 "method": "bdev_wait_for_examine" 00:20:21.770 } 00:20:21.770 ] 00:20:21.770 } 00:20:21.770 ] 00:20:21.770 } 00:20:21.770 [2024-11-20 15:32:07.680380] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
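The MiB/s column bdevperf prints is just IOPS scaled by the 4 KiB I/O size, so the tables are easy to sanity-check; for the randread total reported above:

  awk 'BEGIN { printf "%.2f MiB/s\n", 51135.52 * 4096 / 1048576 }'
  # -> 199.75 MiB/s, matching the bdevperf summary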
00:20:21.770 [2024-11-20 15:32:07.680530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71776 ] 00:20:22.028 [2024-11-20 15:32:07.874498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.286 [2024-11-20 15:32:07.988104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.544 Running I/O for 5 seconds... 00:20:24.415 43264.00 IOPS, 169.00 MiB/s [2024-11-20T15:32:11.748Z] 44254.50 IOPS, 172.87 MiB/s [2024-11-20T15:32:12.683Z] 43990.00 IOPS, 171.84 MiB/s [2024-11-20T15:32:13.618Z] 44176.25 IOPS, 172.56 MiB/s 00:20:27.660 Latency(us) 00:20:27.660 [2024-11-20T15:32:13.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.660 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:27.660 xnvme_bdev : 5.00 44744.66 174.78 0.00 0.00 1425.69 158.96 7427.41 00:20:27.660 [2024-11-20T15:32:13.618Z] =================================================================================================================== 00:20:27.660 [2024-11-20T15:32:13.618Z] Total : 44744.66 174.78 0.00 0.00 1425.69 158.96 7427.41 00:20:28.593 00:20:28.593 real 0m13.832s 00:20:28.593 user 0m6.511s 00:20:28.593 sys 0m7.117s 00:20:28.593 15:32:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.593 ************************************ 00:20:28.593 END TEST xnvme_bdevperf 00:20:28.593 ************************************ 00:20:28.593 15:32:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:28.593 15:32:14 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:28.593 15:32:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:28.593 15:32:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.593 15:32:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:28.593 ************************************ 00:20:28.593 START TEST xnvme_fio_plugin 00:20:28.593 ************************************ 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:28.593 15:32:14 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:28.593 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:28.852 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:28.852 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:28.852 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:28.852 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:28.852 15:32:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.852 { 00:20:28.852 "subsystems": [ 00:20:28.852 { 00:20:28.852 "subsystem": "bdev", 00:20:28.852 "config": [ 00:20:28.852 { 00:20:28.852 "params": { 00:20:28.852 "io_mechanism": "io_uring", 00:20:28.852 "conserve_cpu": false, 00:20:28.852 "filename": "/dev/nvme0n1", 00:20:28.852 "name": "xnvme_bdev" 00:20:28.852 }, 00:20:28.852 "method": "bdev_xnvme_create" 00:20:28.852 }, 00:20:28.852 { 00:20:28.852 "method": "bdev_wait_for_examine" 00:20:28.852 } 00:20:28.852 ] 00:20:28.852 } 00:20:28.852 ] 00:20:28.852 } 00:20:28.852 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:28.852 fio-3.35 00:20:28.852 Starting 1 thread 00:20:35.482 00:20:35.482 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71897: Wed Nov 20 15:32:20 2024 00:20:35.482 read: IOPS=48.2k, BW=188MiB/s (197MB/s)(941MiB/5001msec) 00:20:35.482 slat (nsec): min=2337, max=37724, avg=3768.90, stdev=1162.71 00:20:35.482 clat (usec): min=176, max=36802, avg=1181.81, stdev=304.24 00:20:35.482 lat (usec): min=181, max=36809, avg=1185.58, stdev=304.41 00:20:35.482 clat percentiles (usec): 00:20:35.482 | 1.00th=[ 865], 5.00th=[ 955], 10.00th=[ 996], 20.00th=[ 1045], 00:20:35.482 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:20:35.482 | 70.00th=[ 1221], 80.00th=[ 1270], 90.00th=[ 1336], 95.00th=[ 1467], 00:20:35.482 | 99.00th=[ 2040], 99.50th=[ 2409], 99.90th=[ 4015], 99.95th=[ 4883], 00:20:35.482 | 99.99th=[ 7177] 00:20:35.482 bw ( KiB/s): min=174048, max=209408, per=99.98%, avg=192689.67, 
stdev=9877.34, samples=9 00:20:35.482 iops : min=43512, max=52352, avg=48172.33, stdev=2469.25, samples=9 00:20:35.482 lat (usec) : 250=0.01%, 500=0.06%, 750=0.26%, 1000=10.74% 00:20:35.482 lat (msec) : 2=87.83%, 4=1.01%, 10=0.09%, 20=0.01%, 50=0.01% 00:20:35.482 cpu : usr=31.92%, sys=67.26%, ctx=15, majf=0, minf=762 00:20:35.482 IO depths : 1=1.3%, 2=2.8%, 4=5.9%, 8=12.2%, 16=25.1%, 32=51.1%, >=64=1.6% 00:20:35.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.482 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:35.482 issued rwts: total=240950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:35.482 00:20:35.482 Run status group 0 (all jobs): 00:20:35.482 READ: bw=188MiB/s (197MB/s), 188MiB/s-188MiB/s (197MB/s-197MB/s), io=941MiB (987MB), run=5001-5001msec 00:20:36.050 ----------------------------------------------------- 00:20:36.051 Suppressions used: 00:20:36.051 count bytes template 00:20:36.051 1 11 /usr/src/fio/parse.c 00:20:36.051 1 8 libtcmalloc_minimal.so 00:20:36.051 1 904 libcrypto.so 00:20:36.051 ----------------------------------------------------- 00:20:36.051 00:20:36.051 15:32:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:36.051 15:32:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:36.051 15:32:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:36.051 15:32:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:36.051 15:32:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:36.051 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:36.051 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:36.051 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:36.051 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:36.051 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.051 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:36.051 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:36.051 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.310 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.310 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:36.310 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:36.310 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:36.310 15:32:22 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:36.310 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:36.310 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:36.310 15:32:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:36.310 { 00:20:36.310 "subsystems": [ 00:20:36.310 { 00:20:36.310 "subsystem": "bdev", 00:20:36.310 "config": [ 00:20:36.310 { 00:20:36.310 "params": { 00:20:36.310 "io_mechanism": "io_uring", 00:20:36.310 "conserve_cpu": false, 00:20:36.310 "filename": "/dev/nvme0n1", 00:20:36.310 "name": "xnvme_bdev" 00:20:36.310 }, 00:20:36.310 "method": "bdev_xnvme_create" 00:20:36.310 }, 00:20:36.310 { 00:20:36.310 "method": "bdev_wait_for_examine" 00:20:36.310 } 00:20:36.310 ] 00:20:36.310 } 00:20:36.310 ] 00:20:36.310 } 00:20:36.569 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:36.569 fio-3.35 00:20:36.569 Starting 1 thread 00:20:43.136 00:20:43.136 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71996: Wed Nov 20 15:32:28 2024 00:20:43.136 write: IOPS=51.7k, BW=202MiB/s (212MB/s)(1011MiB/5002msec); 0 zone resets 00:20:43.136 slat (nsec): min=2804, max=47504, avg=3673.77, stdev=1142.58 00:20:43.136 clat (usec): min=439, max=4528, avg=1093.15, stdev=147.21 00:20:43.137 lat (usec): min=443, max=4532, avg=1096.82, stdev=147.57 00:20:43.137 clat percentiles (usec): 00:20:43.137 | 1.00th=[ 873], 5.00th=[ 914], 10.00th=[ 947], 20.00th=[ 988], 00:20:43.137 | 30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1106], 00:20:43.137 | 70.00th=[ 1139], 80.00th=[ 1188], 90.00th=[ 1254], 95.00th=[ 1319], 00:20:43.137 | 99.00th=[ 1532], 99.50th=[ 1762], 99.90th=[ 2057], 99.95th=[ 2474], 00:20:43.137 | 99.99th=[ 4424] 00:20:43.137 bw ( KiB/s): min=194040, max=223744, per=100.00%, avg=209594.67, stdev=9906.55, samples=9 00:20:43.137 iops : min=48510, max=55936, avg=52398.67, stdev=2476.64, samples=9 00:20:43.137 lat (usec) : 500=0.01%, 750=0.01%, 1000=24.87% 00:20:43.137 lat (msec) : 2=74.98%, 4=0.11%, 10=0.02% 00:20:43.137 cpu : usr=32.33%, sys=66.87%, ctx=13, majf=0, minf=763 00:20:43.137 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:43.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.137 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:43.137 issued rwts: total=0,258706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:43.137 00:20:43.137 Run status group 0 (all jobs): 00:20:43.137 WRITE: bw=202MiB/s (212MB/s), 202MiB/s-202MiB/s (212MB/s-212MB/s), io=1011MiB (1060MB), run=5002-5002msec 00:20:43.705 ----------------------------------------------------- 00:20:43.705 Suppressions used: 00:20:43.705 count bytes template 00:20:43.705 1 11 /usr/src/fio/parse.c 00:20:43.705 1 8 libtcmalloc_minimal.so 00:20:43.705 1 904 libcrypto.so 00:20:43.705 ----------------------------------------------------- 00:20:43.705 00:20:43.705 00:20:43.705 real 0m15.064s 00:20:43.705 user 0m7.216s 00:20:43.705 sys 0m7.487s 00:20:43.705 15:32:29 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.705 15:32:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:43.705 ************************************ 00:20:43.705 END TEST xnvme_fio_plugin 00:20:43.705 ************************************ 00:20:43.705 15:32:29 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:43.705 15:32:29 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:20:43.705 15:32:29 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:20:43.705 15:32:29 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:43.705 15:32:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:43.705 15:32:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.705 15:32:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:43.705 ************************************ 00:20:43.705 START TEST xnvme_rpc 00:20:43.705 ************************************ 00:20:43.705 15:32:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:43.705 15:32:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:43.705 15:32:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:43.705 15:32:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:43.705 15:32:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:43.705 15:32:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72091 00:20:43.705 15:32:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:43.705 15:32:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72091 00:20:43.705 15:32:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72091 ']' 00:20:43.705 15:32:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.705 15:32:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.706 15:32:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.706 15:32:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.706 15:32:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:43.965 [2024-11-20 15:32:29.792831] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
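As with pids 71139 and 71615 earlier, this xnvme_rpc pass ends by reaping its spdk_tgt (pid 72091) through killprocess. The traced logic boils down to the following (a condensed sketch that drops the sudo special case visible in the trace):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1          # bail out if the process is already gone
      ps --no-headers -o comm= "$pid"     # comm shows up as reactor_0 in these runs
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                         # reap the child so the harness sees its exit code
  }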
00:20:43.965 [2024-11-20 15:32:29.793006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72091 ] 00:20:44.224 [2024-11-20 15:32:29.975213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.224 [2024-11-20 15:32:30.093427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:45.162 xnvme_bdev 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:45.162 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72091 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72091 ']' 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72091 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72091 00:20:45.421 killing process with pid 72091 00:20:45.421 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:45.422 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:45.422 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72091' 00:20:45.422 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72091 00:20:45.422 15:32:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72091 00:20:47.978 00:20:47.978 real 0m4.026s 00:20:47.978 user 0m4.116s 00:20:47.978 sys 0m0.521s 00:20:47.978 ************************************ 00:20:47.978 END TEST xnvme_rpc 00:20:47.978 ************************************ 00:20:47.978 15:32:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.978 15:32:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.978 15:32:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:47.978 15:32:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:47.978 15:32:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.978 15:32:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:47.978 ************************************ 00:20:47.978 START TEST xnvme_bdevperf 00:20:47.978 ************************************ 00:20:47.979 15:32:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:47.979 15:32:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:47.979 15:32:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:20:47.979 15:32:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:47.979 15:32:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:47.979 15:32:33 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:47.979 15:32:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:47.979 15:32:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:47.979 { 00:20:47.979 "subsystems": [ 00:20:47.979 { 00:20:47.979 "subsystem": "bdev", 00:20:47.979 "config": [ 00:20:47.979 { 00:20:47.979 "params": { 00:20:47.979 "io_mechanism": "io_uring", 00:20:47.979 "conserve_cpu": true, 00:20:47.979 "filename": "/dev/nvme0n1", 00:20:47.979 "name": "xnvme_bdev" 00:20:47.979 }, 00:20:47.979 "method": "bdev_xnvme_create" 00:20:47.979 }, 00:20:47.979 { 00:20:47.979 "method": "bdev_wait_for_examine" 00:20:47.979 } 00:20:47.979 ] 00:20:47.979 } 00:20:47.979 ] 00:20:47.979 } 00:20:47.979 [2024-11-20 15:32:33.862476] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:20:47.979 [2024-11-20 15:32:33.862867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72172 ] 00:20:48.237 [2024-11-20 15:32:34.054741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.237 [2024-11-20 15:32:34.169563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.804 Running I/O for 5 seconds... 00:20:50.674 47842.00 IOPS, 186.88 MiB/s [2024-11-20T15:32:37.566Z] 49055.50 IOPS, 191.62 MiB/s [2024-11-20T15:32:38.943Z] 49232.33 IOPS, 192.31 MiB/s [2024-11-20T15:32:39.880Z] 49904.00 IOPS, 194.94 MiB/s [2024-11-20T15:32:39.880Z] 50162.80 IOPS, 195.95 MiB/s 00:20:53.922 Latency(us) 00:20:53.922 [2024-11-20T15:32:39.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.922 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:53.922 xnvme_bdev : 5.00 50129.81 195.82 0.00 0.00 1272.07 103.38 12857.54 00:20:53.922 [2024-11-20T15:32:39.880Z] =================================================================================================================== 00:20:53.922 [2024-11-20T15:32:39.880Z] Total : 50129.81 195.82 0.00 0.00 1272.07 103.38 12857.54 00:20:54.858 15:32:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:54.858 15:32:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:54.858 15:32:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:54.858 15:32:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:54.858 15:32:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:54.858 { 00:20:54.858 "subsystems": [ 00:20:54.858 { 00:20:54.858 "subsystem": "bdev", 00:20:54.858 "config": [ 00:20:54.858 { 00:20:54.858 "params": { 00:20:54.858 "io_mechanism": "io_uring", 00:20:54.858 "conserve_cpu": true, 00:20:54.858 "filename": "/dev/nvme0n1", 00:20:54.858 "name": "xnvme_bdev" 00:20:54.858 }, 00:20:54.858 "method": "bdev_xnvme_create" 00:20:54.858 }, 00:20:54.858 { 00:20:54.858 "method": "bdev_wait_for_examine" 00:20:54.858 } 00:20:54.858 ] 00:20:54.858 } 00:20:54.858 ] 00:20:54.858 } 00:20:54.858 [2024-11-20 15:32:40.796270] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
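Annotation: gen_conf above assembles the JSON subsystem config and hands it to bdevperf as an anonymous file descriptor (--json /dev/fd/62), so no config file ever touches disk. A rough standalone equivalent of the randread invocation, with the same config inlined through process substitution (repo-relative binary path assumed):

./build/examples/bdevperf -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
    --json <(cat <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring",
   "conserve_cpu":true,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
)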
00:20:54.858 [2024-11-20 15:32:40.796441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72253 ] 00:20:55.117 [2024-11-20 15:32:40.987130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.376 [2024-11-20 15:32:41.102001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.635 Running I/O for 5 seconds... 00:20:57.948 44672.00 IOPS, 174.50 MiB/s [2024-11-20T15:32:44.475Z] 44839.50 IOPS, 175.15 MiB/s [2024-11-20T15:32:45.853Z] 45063.67 IOPS, 176.03 MiB/s [2024-11-20T15:32:46.833Z] 45090.25 IOPS, 176.13 MiB/s 00:21:00.875 Latency(us) 00:21:00.875 [2024-11-20T15:32:46.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.875 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:00.875 xnvme_bdev : 5.00 44982.40 175.71 0.00 0.00 1418.10 88.75 6397.56 00:21:00.875 [2024-11-20T15:32:46.833Z] =================================================================================================================== 00:21:00.875 [2024-11-20T15:32:46.833Z] Total : 44982.40 175.71 0.00 0.00 1418.10 88.75 6397.56 00:21:01.838 00:21:01.838 real 0m13.912s 00:21:01.838 user 0m6.971s 00:21:01.838 sys 0m6.437s 00:21:01.838 15:32:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.838 15:32:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:01.838 ************************************ 00:21:01.838 END TEST xnvme_bdevperf 00:21:01.838 ************************************ 00:21:01.838 15:32:47 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:01.838 15:32:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:01.838 15:32:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.838 15:32:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:01.838 ************************************ 00:21:01.838 START TEST xnvme_fio_plugin 00:21:01.838 ************************************ 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:01.838 15:32:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:01.838 { 00:21:01.838 "subsystems": [ 00:21:01.838 { 00:21:01.838 "subsystem": "bdev", 00:21:01.838 "config": [ 00:21:01.838 { 00:21:01.838 "params": { 00:21:01.838 "io_mechanism": "io_uring", 00:21:01.838 "conserve_cpu": true, 00:21:01.838 "filename": "/dev/nvme0n1", 00:21:01.838 "name": "xnvme_bdev" 00:21:01.838 }, 00:21:01.838 "method": "bdev_xnvme_create" 00:21:01.838 }, 00:21:01.838 { 00:21:01.838 "method": "bdev_wait_for_examine" 00:21:01.838 } 00:21:01.838 ] 00:21:01.838 } 00:21:01.838 ] 00:21:01.838 } 00:21:02.098 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:02.098 fio-3.35 00:21:02.098 Starting 1 thread 00:21:08.666 00:21:08.666 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72372: Wed Nov 20 15:32:53 2024 00:21:08.666 read: IOPS=49.4k, BW=193MiB/s (202MB/s)(965MiB/5001msec) 00:21:08.666 slat (nsec): min=2513, max=121054, avg=3465.33, stdev=981.66 00:21:08.666 clat (usec): min=763, max=4231, avg=1159.26, stdev=132.58 00:21:08.666 lat (usec): min=767, max=4237, avg=1162.73, stdev=132.75 00:21:08.666 clat percentiles (usec): 00:21:08.666 | 1.00th=[ 914], 5.00th=[ 971], 10.00th=[ 1012], 20.00th=[ 1057], 00:21:08.666 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:21:08.666 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1303], 95.00th=[ 1352], 00:21:08.666 | 99.00th=[ 1467], 99.50th=[ 1680], 99.90th=[ 2008], 99.95th=[ 2311], 00:21:08.666 | 99.99th=[ 4146] 00:21:08.666 bw ( KiB/s): min=188416, max=205312, per=99.13%, avg=195953.78, 
stdev=5906.21, samples=9 00:21:08.666 iops : min=47104, max=51328, avg=48988.44, stdev=1476.55, samples=9 00:21:08.666 lat (usec) : 1000=8.22% 00:21:08.666 lat (msec) : 2=91.67%, 4=0.09%, 10=0.02% 00:21:08.666 cpu : usr=39.30%, sys=57.44%, ctx=11, majf=0, minf=762 00:21:08.666 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:08.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.666 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:21:08.666 issued rwts: total=247136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.666 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:08.666 00:21:08.666 Run status group 0 (all jobs): 00:21:08.666 READ: bw=193MiB/s (202MB/s), 193MiB/s-193MiB/s (202MB/s-202MB/s), io=965MiB (1012MB), run=5001-5001msec 00:21:09.234 ----------------------------------------------------- 00:21:09.234 Suppressions used: 00:21:09.234 count bytes template 00:21:09.234 1 11 /usr/src/fio/parse.c 00:21:09.234 1 8 libtcmalloc_minimal.so 00:21:09.234 1 904 libcrypto.so 00:21:09.234 ----------------------------------------------------- 00:21:09.234 00:21:09.493 15:32:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:09.493 15:32:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:09.493 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:09.493 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:09.494 15:32:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:09.494 { 00:21:09.494 "subsystems": [ 00:21:09.494 { 00:21:09.494 "subsystem": "bdev", 00:21:09.494 "config": [ 00:21:09.494 { 00:21:09.494 "params": { 00:21:09.494 "io_mechanism": "io_uring", 00:21:09.494 "conserve_cpu": true, 00:21:09.494 "filename": "/dev/nvme0n1", 00:21:09.494 "name": "xnvme_bdev" 00:21:09.494 }, 00:21:09.494 "method": "bdev_xnvme_create" 00:21:09.494 }, 00:21:09.494 { 00:21:09.494 "method": "bdev_wait_for_examine" 00:21:09.494 } 00:21:09.494 ] 00:21:09.494 } 00:21:09.494 ] 00:21:09.494 } 00:21:09.753 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:09.753 fio-3.35 00:21:09.753 Starting 1 thread 00:21:16.322 00:21:16.322 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72479: Wed Nov 20 15:33:01 2024 00:21:16.322 write: IOPS=51.2k, BW=200MiB/s (210MB/s)(1000MiB/5001msec); 0 zone resets 00:21:16.322 slat (nsec): min=2869, max=44686, avg=3658.57, stdev=990.22 00:21:16.322 clat (usec): min=795, max=6911, avg=1106.64, stdev=127.29 00:21:16.322 lat (usec): min=798, max=6914, avg=1110.30, stdev=127.54 00:21:16.322 clat percentiles (usec): 00:21:16.322 | 1.00th=[ 881], 5.00th=[ 930], 10.00th=[ 963], 20.00th=[ 1012], 00:21:16.322 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:21:16.322 | 70.00th=[ 1156], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[ 1287], 00:21:16.322 | 99.00th=[ 1418], 99.50th=[ 1565], 99.90th=[ 1958], 99.95th=[ 2835], 00:21:16.322 | 99.99th=[ 3228] 00:21:16.322 bw ( KiB/s): min=189952, max=211456, per=99.63%, avg=203942.22, stdev=7956.09, samples=9 00:21:16.322 iops : min=47488, max=52864, avg=50985.56, stdev=1989.02, samples=9 00:21:16.322 lat (usec) : 1000=17.94% 00:21:16.322 lat (msec) : 2=81.97%, 4=0.09%, 10=0.01% 00:21:16.322 cpu : usr=40.36%, sys=56.38%, ctx=13, majf=0, minf=763 00:21:16.322 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:16.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.322 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:21:16.322 issued rwts: total=0,255931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.322 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:16.322 00:21:16.322 Run status group 0 (all jobs): 00:21:16.322 WRITE: bw=200MiB/s (210MB/s), 200MiB/s-200MiB/s (210MB/s-210MB/s), io=1000MiB (1048MB), run=5001-5001msec 00:21:16.889 ----------------------------------------------------- 00:21:16.889 Suppressions used: 00:21:16.889 count bytes template 00:21:16.889 1 11 /usr/src/fio/parse.c 00:21:16.889 1 8 libtcmalloc_minimal.so 00:21:16.889 1 904 libcrypto.so 00:21:16.889 ----------------------------------------------------- 00:21:16.889 00:21:16.889 ************************************ 00:21:16.889 END TEST xnvme_fio_plugin 00:21:16.889 ************************************ 00:21:16.889 00:21:16.889 real 0m14.943s 00:21:16.889 user 0m7.882s 
00:21:16.889 sys 0m6.449s 00:21:16.889 15:33:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.889 15:33:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:16.889 15:33:02 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:21:16.889 15:33:02 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:21:16.889 15:33:02 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:21:16.889 15:33:02 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:21:16.889 15:33:02 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:21:16.889 15:33:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:21:16.890 15:33:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:21:16.890 15:33:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:21:16.890 15:33:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:21:16.890 15:33:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:16.890 15:33:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.890 15:33:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:16.890 ************************************ 00:21:16.890 START TEST xnvme_rpc 00:21:16.890 ************************************ 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72561 00:21:16.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72561 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72561 ']' 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.890 15:33:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:16.890 [2024-11-20 15:33:02.827755] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
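Annotation: from this point the suite swaps the io_mechanism to io_uring_cmd and the target to /dev/ng0n1, the NVMe generic character device, so xnvme drives the drive with NVMe passthrough commands over io_uring rather than block-layer I/O. Assuming rpc_cmd in the trace is a thin wrapper over scripts/rpc.py (the arguments appear to pass through verbatim), the create-and-verify pair amounts to:

./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd   # no -c: conserve_cpu=false
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring_cmd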
00:21:16.890 [2024-11-20 15:33:02.828040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72561 ] 00:21:17.148 [2024-11-20 15:33:03.010299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.406 [2024-11-20 15:33:03.183667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.344 xnvme_bdev 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:21:18.344 
15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72561 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72561 ']' 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72561 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.344 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72561 00:21:18.603 killing process with pid 72561 00:21:18.603 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:18.603 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:18.603 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72561' 00:21:18.603 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72561 00:21:18.603 15:33:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72561 00:21:21.136 00:21:21.136 real 0m3.887s 00:21:21.136 user 0m4.037s 00:21:21.136 sys 0m0.555s 00:21:21.136 ************************************ 00:21:21.136 END TEST xnvme_rpc 00:21:21.136 ************************************ 00:21:21.136 15:33:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.136 15:33:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.136 15:33:06 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:21.136 15:33:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:21.136 15:33:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.136 15:33:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:21.136 ************************************ 00:21:21.136 START TEST xnvme_bdevperf 00:21:21.136 ************************************ 00:21:21.136 15:33:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:21.136 15:33:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:21.136 15:33:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:21:21.136 15:33:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:21.136 15:33:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:21:21.136 15:33:06 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:21:21.136 15:33:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:21.136 15:33:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:21.136 { 00:21:21.136 "subsystems": [ 00:21:21.136 { 00:21:21.136 "subsystem": "bdev", 00:21:21.136 "config": [ 00:21:21.136 { 00:21:21.136 "params": { 00:21:21.136 "io_mechanism": "io_uring_cmd", 00:21:21.136 "conserve_cpu": false, 00:21:21.136 "filename": "/dev/ng0n1", 00:21:21.136 "name": "xnvme_bdev" 00:21:21.136 }, 00:21:21.136 "method": "bdev_xnvme_create" 00:21:21.136 }, 00:21:21.136 { 00:21:21.136 "method": "bdev_wait_for_examine" 00:21:21.136 } 00:21:21.136 ] 00:21:21.136 } 00:21:21.136 ] 00:21:21.136 } 00:21:21.136 [2024-11-20 15:33:06.803021] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:21:21.136 [2024-11-20 15:33:06.803190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72646 ] 00:21:21.136 [2024-11-20 15:33:06.994241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.394 [2024-11-20 15:33:07.101672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.651 Running I/O for 5 seconds... 00:21:23.522 52800.00 IOPS, 206.25 MiB/s [2024-11-20T15:33:10.858Z] 52864.00 IOPS, 206.50 MiB/s [2024-11-20T15:33:11.796Z] 52160.00 IOPS, 203.75 MiB/s [2024-11-20T15:33:12.733Z] 52304.00 IOPS, 204.31 MiB/s 00:21:26.775 Latency(us) 00:21:26.775 [2024-11-20T15:33:12.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.775 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:21:26.775 xnvme_bdev : 5.00 52869.51 206.52 0.00 0.00 1206.74 803.60 3682.50 00:21:26.775 [2024-11-20T15:33:12.733Z] =================================================================================================================== 00:21:26.775 [2024-11-20T15:33:12.733Z] Total : 52869.51 206.52 0.00 0.00 1206.74 803.60 3682.50 00:21:27.713 15:33:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:27.714 15:33:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:21:27.714 15:33:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:27.714 15:33:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:27.714 15:33:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:27.714 { 00:21:27.714 "subsystems": [ 00:21:27.714 { 00:21:27.714 "subsystem": "bdev", 00:21:27.714 "config": [ 00:21:27.714 { 00:21:27.714 "params": { 00:21:27.714 "io_mechanism": "io_uring_cmd", 00:21:27.714 "conserve_cpu": false, 00:21:27.714 "filename": "/dev/ng0n1", 00:21:27.714 "name": "xnvme_bdev" 00:21:27.714 }, 00:21:27.714 "method": "bdev_xnvme_create" 00:21:27.714 }, 00:21:27.714 { 00:21:27.714 "method": "bdev_wait_for_examine" 00:21:27.714 } 00:21:27.714 ] 00:21:27.714 } 00:21:27.714 ] 00:21:27.714 } 00:21:27.714 [2024-11-20 15:33:13.636354] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
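Annotation: for io_uring_cmd the bdevperf stage cycles through four workloads — randread above, randwrite starting here, then unmap and write_zeroes further down — with only -w changing between runs. Condensed into a loop, with the same flags as in this run and conf.json again standing in for the fd-passed config:

for w in randread randwrite unmap write_zeroes; do
    ./build/examples/bdevperf --json conf.json -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
done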
00:21:27.714 [2024-11-20 15:33:13.636799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72724 ] 00:21:27.973 [2024-11-20 15:33:13.803795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.973 [2024-11-20 15:33:13.908730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.539 Running I/O for 5 seconds... 00:21:30.412 49664.00 IOPS, 194.00 MiB/s [2024-11-20T15:33:17.308Z] 50272.00 IOPS, 196.38 MiB/s [2024-11-20T15:33:18.245Z] 51221.33 IOPS, 200.08 MiB/s [2024-11-20T15:33:19.624Z] 50640.00 IOPS, 197.81 MiB/s 00:21:33.666 Latency(us) 00:21:33.666 [2024-11-20T15:33:19.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.666 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:33.666 xnvme_bdev : 5.00 50824.53 198.53 0.00 0.00 1255.02 862.11 3573.27 00:21:33.666 [2024-11-20T15:33:19.624Z] =================================================================================================================== 00:21:33.666 [2024-11-20T15:33:19.624Z] Total : 50824.53 198.53 0.00 0.00 1255.02 862.11 3573.27 00:21:34.604 15:33:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:34.604 15:33:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:34.604 15:33:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:21:34.604 15:33:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:34.604 15:33:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:34.604 { 00:21:34.604 "subsystems": [ 00:21:34.604 { 00:21:34.604 "subsystem": "bdev", 00:21:34.604 "config": [ 00:21:34.604 { 00:21:34.604 "params": { 00:21:34.604 "io_mechanism": "io_uring_cmd", 00:21:34.604 "conserve_cpu": false, 00:21:34.604 "filename": "/dev/ng0n1", 00:21:34.604 "name": "xnvme_bdev" 00:21:34.604 }, 00:21:34.604 "method": "bdev_xnvme_create" 00:21:34.604 }, 00:21:34.604 { 00:21:34.604 "method": "bdev_wait_for_examine" 00:21:34.604 } 00:21:34.604 ] 00:21:34.604 } 00:21:34.604 ] 00:21:34.604 } 00:21:34.604 [2024-11-20 15:33:20.455813] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:21:34.604 [2024-11-20 15:33:20.455973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72807 ] 00:21:34.863 [2024-11-20 15:33:20.641847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.863 [2024-11-20 15:33:20.753236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.431 Running I/O for 5 seconds... 
00:21:37.358 98944.00 IOPS, 386.50 MiB/s [2024-11-20T15:33:24.251Z] 99040.00 IOPS, 386.88 MiB/s [2024-11-20T15:33:25.188Z] 97152.00 IOPS, 379.50 MiB/s [2024-11-20T15:33:26.126Z] 97200.00 IOPS, 379.69 MiB/s 00:21:40.168 Latency(us) 00:21:40.168 [2024-11-20T15:33:26.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.168 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:21:40.168 xnvme_bdev : 5.00 97432.49 380.60 0.00 0.00 654.12 395.95 2059.70 00:21:40.168 [2024-11-20T15:33:26.126Z] =================================================================================================================== 00:21:40.168 [2024-11-20T15:33:26.126Z] Total : 97432.49 380.60 0.00 0.00 654.12 395.95 2059.70 00:21:41.567 15:33:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:41.567 15:33:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:21:41.567 15:33:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:41.567 15:33:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:41.567 15:33:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:41.567 { 00:21:41.567 "subsystems": [ 00:21:41.567 { 00:21:41.567 "subsystem": "bdev", 00:21:41.567 "config": [ 00:21:41.567 { 00:21:41.567 "params": { 00:21:41.567 "io_mechanism": "io_uring_cmd", 00:21:41.567 "conserve_cpu": false, 00:21:41.567 "filename": "/dev/ng0n1", 00:21:41.567 "name": "xnvme_bdev" 00:21:41.567 }, 00:21:41.567 "method": "bdev_xnvme_create" 00:21:41.567 }, 00:21:41.567 { 00:21:41.567 "method": "bdev_wait_for_examine" 00:21:41.567 } 00:21:41.567 ] 00:21:41.567 } 00:21:41.567 ] 00:21:41.567 } 00:21:41.567 [2024-11-20 15:33:27.325819] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:21:41.567 [2024-11-20 15:33:27.326044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72887 ] 00:21:41.567 [2024-11-20 15:33:27.517031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.825 [2024-11-20 15:33:27.621371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.083 Running I/O for 5 seconds... 
00:21:44.399 49970.00 IOPS, 195.20 MiB/s [2024-11-20T15:33:31.291Z] 51731.50 IOPS, 202.08 MiB/s [2024-11-20T15:33:32.225Z] 51498.33 IOPS, 201.17 MiB/s [2024-11-20T15:33:33.164Z] 51374.00 IOPS, 200.68 MiB/s 00:21:47.206 Latency(us) 00:21:47.206 [2024-11-20T15:33:33.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.206 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:21:47.206 xnvme_bdev : 5.00 51026.67 199.32 0.00 0.00 1250.27 331.58 11047.50 00:21:47.206 [2024-11-20T15:33:33.164Z] =================================================================================================================== 00:21:47.206 [2024-11-20T15:33:33.164Z] Total : 51026.67 199.32 0.00 0.00 1250.27 331.58 11047.50 00:21:48.585 00:21:48.585 real 0m27.614s 00:21:48.585 user 0m14.284s 00:21:48.585 sys 0m12.960s 00:21:48.585 15:33:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.585 15:33:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:48.585 ************************************ 00:21:48.585 END TEST xnvme_bdevperf 00:21:48.585 ************************************ 00:21:48.585 15:33:34 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:48.585 15:33:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:48.585 15:33:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.585 15:33:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:48.585 ************************************ 00:21:48.585 START TEST xnvme_fio_plugin 00:21:48.585 ************************************ 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:48.585 15:33:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:48.585 { 00:21:48.585 "subsystems": [ 00:21:48.585 { 00:21:48.585 "subsystem": "bdev", 00:21:48.585 "config": [ 00:21:48.585 { 00:21:48.585 "params": { 00:21:48.585 "io_mechanism": "io_uring_cmd", 00:21:48.585 "conserve_cpu": false, 00:21:48.585 "filename": "/dev/ng0n1", 00:21:48.585 "name": "xnvme_bdev" 00:21:48.585 }, 00:21:48.585 "method": "bdev_xnvme_create" 00:21:48.585 }, 00:21:48.585 { 00:21:48.585 "method": "bdev_wait_for_examine" 00:21:48.585 } 00:21:48.585 ] 00:21:48.585 } 00:21:48.585 ] 00:21:48.585 } 00:21:48.845 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:48.845 fio-3.35 00:21:48.845 Starting 1 thread 00:21:55.414 00:21:55.414 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73011: Wed Nov 20 15:33:40 2024 00:21:55.414 read: IOPS=48.9k, BW=191MiB/s (201MB/s)(956MiB/5001msec) 00:21:55.414 slat (nsec): min=2940, max=93687, avg=3900.22, stdev=964.71 00:21:55.414 clat (usec): min=836, max=3024, avg=1154.14, stdev=113.25 00:21:55.414 lat (usec): min=840, max=3056, avg=1158.04, stdev=113.45 00:21:55.414 clat percentiles (usec): 00:21:55.414 | 1.00th=[ 930], 5.00th=[ 988], 10.00th=[ 1020], 20.00th=[ 1057], 00:21:55.414 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1172], 00:21:55.414 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1287], 95.00th=[ 1319], 00:21:55.414 | 99.00th=[ 1418], 99.50th=[ 1549], 99.90th=[ 1909], 99.95th=[ 1991], 00:21:55.414 | 99.99th=[ 2737] 00:21:55.414 bw ( KiB/s): min=183296, max=201216, per=99.37%, avg=194560.00, stdev=6456.07, samples=9 00:21:55.414 iops : min=45824, max=50304, avg=48640.00, stdev=1614.02, samples=9 00:21:55.414 lat (usec) : 1000=6.57% 00:21:55.414 lat (msec) : 2=93.39%, 4=0.04% 00:21:55.414 cpu : usr=33.56%, sys=65.70%, ctx=13, majf=0, minf=762 00:21:55.414 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:55.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.414 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:21:55.414 issued rwts: total=244800,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:55.414 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:55.414 00:21:55.414 Run status group 0 (all jobs): 00:21:55.414 READ: bw=191MiB/s (201MB/s), 191MiB/s-191MiB/s (201MB/s-201MB/s), io=956MiB (1003MB), run=5001-5001msec 00:21:55.983 ----------------------------------------------------- 00:21:55.983 Suppressions used: 00:21:55.983 count bytes template 00:21:55.983 1 11 /usr/src/fio/parse.c 00:21:55.983 1 8 libtcmalloc_minimal.so 00:21:55.983 1 904 libcrypto.so 00:21:55.983 ----------------------------------------------------- 00:21:55.983 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:55.983 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:55.984 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:55.984 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:55.984 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:55.984 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:55.984 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:55.984 15:33:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 
--name xnvme_bdev 00:21:55.984 { 00:21:55.984 "subsystems": [ 00:21:55.984 { 00:21:55.984 "subsystem": "bdev", 00:21:55.984 "config": [ 00:21:55.984 { 00:21:55.984 "params": { 00:21:55.984 "io_mechanism": "io_uring_cmd", 00:21:55.984 "conserve_cpu": false, 00:21:55.984 "filename": "/dev/ng0n1", 00:21:55.984 "name": "xnvme_bdev" 00:21:55.984 }, 00:21:55.984 "method": "bdev_xnvme_create" 00:21:55.984 }, 00:21:55.984 { 00:21:55.984 "method": "bdev_wait_for_examine" 00:21:55.984 } 00:21:55.984 ] 00:21:55.984 } 00:21:55.984 ] 00:21:55.984 } 00:21:56.244 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:56.244 fio-3.35 00:21:56.244 Starting 1 thread 00:22:02.835 00:22:02.835 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73102: Wed Nov 20 15:33:47 2024 00:22:02.835 write: IOPS=47.1k, BW=184MiB/s (193MB/s)(921MiB/5001msec); 0 zone resets 00:22:02.835 slat (nsec): min=2959, max=81816, avg=4563.79, stdev=1661.86 00:22:02.835 clat (usec): min=178, max=2699, avg=1179.86, stdev=173.05 00:22:02.835 lat (usec): min=182, max=2781, avg=1184.42, stdev=173.82 00:22:02.835 clat percentiles (usec): 00:22:02.835 | 1.00th=[ 930], 5.00th=[ 971], 10.00th=[ 1004], 20.00th=[ 1045], 00:22:02.835 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:22:02.835 | 70.00th=[ 1221], 80.00th=[ 1270], 90.00th=[ 1369], 95.00th=[ 1516], 00:22:02.835 | 99.00th=[ 1844], 99.50th=[ 1909], 99.90th=[ 2057], 99.95th=[ 2147], 00:22:02.835 | 99.99th=[ 2474] 00:22:02.835 bw ( KiB/s): min=177152, max=197632, per=99.99%, avg=188537.78, stdev=7272.67, samples=9 00:22:02.835 iops : min=44288, max=49408, avg=47134.44, stdev=1818.17, samples=9 00:22:02.835 lat (usec) : 250=0.01%, 500=0.01%, 1000=10.00% 00:22:02.835 lat (msec) : 2=89.83%, 4=0.16% 00:22:02.835 cpu : usr=36.72%, sys=62.40%, ctx=14, majf=0, minf=763 00:22:02.835 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:02.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.835 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:22:02.835 issued rwts: total=0,235731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.835 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.835 00:22:02.835 Run status group 0 (all jobs): 00:22:02.835 WRITE: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=921MiB (966MB), run=5001-5001msec 00:22:03.776 ----------------------------------------------------- 00:22:03.776 Suppressions used: 00:22:03.776 count bytes template 00:22:03.776 1 11 /usr/src/fio/parse.c 00:22:03.776 1 8 libtcmalloc_minimal.so 00:22:03.776 1 904 libcrypto.so 00:22:03.776 ----------------------------------------------------- 00:22:03.776 00:22:03.776 00:22:03.776 real 0m15.052s 00:22:03.776 user 0m7.494s 00:22:03.776 sys 0m7.196s 00:22:03.776 15:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:03.776 15:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:03.776 ************************************ 00:22:03.776 END TEST xnvme_fio_plugin 00:22:03.776 ************************************ 00:22:03.776 15:33:49 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:22:03.776 15:33:49 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:22:03.776 15:33:49 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:22:03.776 15:33:49 nvme_xnvme -- xnvme/xnvme.sh@86 -- 
# run_test xnvme_rpc xnvme_rpc 00:22:03.776 15:33:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:03.776 15:33:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:03.776 15:33:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:03.776 ************************************ 00:22:03.776 START TEST xnvme_rpc 00:22:03.776 ************************************ 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73187 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73187 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73187 ']' 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:03.776 15:33:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:03.776 [2024-11-20 15:33:49.616177] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
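For reference, the xnvme_rpc test starting here boots a bare spdk_tgt and then exercises the xNVMe bdev lifecycle purely over the RPC socket: create the bdev, read each parameter back out of the saved config, delete it. A minimal manual equivalent, assuming SPDK's standard scripts/rpc.py client (the rpc_cmd seen in the trace is a thin wrapper around it) and the same /dev/ng0n1 io_uring_cmd character device, would be:

  $ /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &   # serves /var/tmp/spdk.sock
  $ scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
  $ scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  true
  $ scripts/rpc.py bdev_xnvme_delete xnvme_bdev

The method names, argument order, and the jq filter are taken verbatim from the trace below; only the use of rpc.py directly, instead of the harness wrapper, is assumed.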
00:22:03.776 [2024-11-20 15:33:49.616353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73187 ] 00:22:04.035 [2024-11-20 15:33:49.795403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.035 [2024-11-20 15:33:49.903433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:04.973 xnvme_bdev 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:22:04.973 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73187 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73187 ']' 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73187 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.232 15:33:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73187 00:22:05.232 15:33:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.232 15:33:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.232 killing process with pid 73187 00:22:05.232 15:33:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73187' 00:22:05.232 15:33:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73187 00:22:05.232 15:33:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73187 00:22:07.769 00:22:07.769 real 0m3.908s 00:22:07.769 user 0m3.991s 00:22:07.769 sys 0m0.540s 00:22:07.769 15:33:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.769 15:33:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.769 ************************************ 00:22:07.769 END TEST xnvme_rpc 00:22:07.769 ************************************ 00:22:07.769 15:33:53 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:22:07.769 15:33:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:07.769 15:33:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.769 15:33:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:07.769 ************************************ 00:22:07.769 START TEST xnvme_bdevperf 00:22:07.769 ************************************ 00:22:07.769 15:33:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:22:07.769 15:33:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:22:07.769 15:33:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:22:07.769 15:33:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:07.769 15:33:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:22:07.769 15:33:53 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:22:07.769 15:33:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:07.769 15:33:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:07.769 { 00:22:07.769 "subsystems": [ 00:22:07.769 { 00:22:07.769 "subsystem": "bdev", 00:22:07.769 "config": [ 00:22:07.769 { 00:22:07.769 "params": { 00:22:07.769 "io_mechanism": "io_uring_cmd", 00:22:07.769 "conserve_cpu": true, 00:22:07.770 "filename": "/dev/ng0n1", 00:22:07.770 "name": "xnvme_bdev" 00:22:07.770 }, 00:22:07.770 "method": "bdev_xnvme_create" 00:22:07.770 }, 00:22:07.770 { 00:22:07.770 "method": "bdev_wait_for_examine" 00:22:07.770 } 00:22:07.770 ] 00:22:07.770 } 00:22:07.770 ] 00:22:07.770 } 00:22:07.770 [2024-11-20 15:33:53.557705] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:22:07.770 [2024-11-20 15:33:53.557871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73268 ] 00:22:08.028 [2024-11-20 15:33:53.749425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.028 [2024-11-20 15:33:53.858268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.288 Running I/O for 5 seconds... 00:22:10.603 46144.00 IOPS, 180.25 MiB/s [2024-11-20T15:33:57.546Z] 46848.00 IOPS, 183.00 MiB/s [2024-11-20T15:33:58.481Z] 47530.67 IOPS, 185.67 MiB/s [2024-11-20T15:33:59.414Z] 47776.00 IOPS, 186.62 MiB/s [2024-11-20T15:33:59.414Z] 47987.20 IOPS, 187.45 MiB/s 00:22:13.456 Latency(us) 00:22:13.456 [2024-11-20T15:33:59.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.456 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:22:13.456 xnvme_bdev : 5.01 47941.55 187.27 0.00 0.00 1330.81 846.51 4181.82 00:22:13.456 [2024-11-20T15:33:59.414Z] =================================================================================================================== 00:22:13.456 [2024-11-20T15:33:59.414Z] Total : 47941.55 187.27 0.00 0.00 1330.81 846.51 4181.82 00:22:14.831 15:34:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:14.831 15:34:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:22:14.831 15:34:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:14.831 15:34:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:14.831 15:34:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:14.831 { 00:22:14.831 "subsystems": [ 00:22:14.831 { 00:22:14.831 "subsystem": "bdev", 00:22:14.831 "config": [ 00:22:14.831 { 00:22:14.831 "params": { 00:22:14.831 "io_mechanism": "io_uring_cmd", 00:22:14.831 "conserve_cpu": true, 00:22:14.831 "filename": "/dev/ng0n1", 00:22:14.831 "name": "xnvme_bdev" 00:22:14.831 }, 00:22:14.831 "method": "bdev_xnvme_create" 00:22:14.831 }, 00:22:14.831 { 00:22:14.831 "method": "bdev_wait_for_examine" 00:22:14.831 } 00:22:14.831 ] 00:22:14.831 } 00:22:14.831 ] 00:22:14.831 } 00:22:14.831 [2024-11-20 15:34:00.481159] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
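The JSON blob just emitted by gen_conf is a complete, self-contained bdevperf configuration: one bdev_xnvme_create call (this pass with conserve_cpu=true) followed by bdev_wait_for_examine so the run does not start before the bdev exists. Saved to a file, the same randwrite pass can be reproduced by hand; everything here is copied from the trace except the /tmp filename, which is invented for illustration:

  $ cat > /tmp/xnvme_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_xnvme_create",
            "params": {
              "io_mechanism": "io_uring_cmd",
              "conserve_cpu": true,
              "filename": "/dev/ng0n1",
              "name": "xnvme_bdev"
            }
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  $ /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/xnvme_bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096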
00:22:14.831 [2024-11-20 15:34:00.481293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73348 ] 00:22:14.831 [2024-11-20 15:34:00.650253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.831 [2024-11-20 15:34:00.762001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.414 Running I/O for 5 seconds... 00:22:17.283 48336.00 IOPS, 188.81 MiB/s [2024-11-20T15:34:04.175Z] 48516.00 IOPS, 189.52 MiB/s [2024-11-20T15:34:05.552Z] 48918.67 IOPS, 191.09 MiB/s [2024-11-20T15:34:06.119Z] 48577.00 IOPS, 189.75 MiB/s [2024-11-20T15:34:06.377Z] 48180.00 IOPS, 188.20 MiB/s 00:22:20.419 Latency(us) 00:22:20.419 [2024-11-20T15:34:06.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.419 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:22:20.419 xnvme_bdev : 5.01 48100.98 187.89 0.00 0.00 1325.76 73.63 7302.58 00:22:20.419 [2024-11-20T15:34:06.377Z] =================================================================================================================== 00:22:20.419 [2024-11-20T15:34:06.377Z] Total : 48100.98 187.89 0.00 0.00 1325.76 73.63 7302.58 00:22:21.796 15:34:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:21.796 15:34:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:22:21.796 15:34:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:21.796 15:34:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:21.796 15:34:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:21.796 { 00:22:21.796 "subsystems": [ 00:22:21.796 { 00:22:21.796 "subsystem": "bdev", 00:22:21.796 "config": [ 00:22:21.796 { 00:22:21.796 "params": { 00:22:21.796 "io_mechanism": "io_uring_cmd", 00:22:21.796 "conserve_cpu": true, 00:22:21.796 "filename": "/dev/ng0n1", 00:22:21.796 "name": "xnvme_bdev" 00:22:21.796 }, 00:22:21.796 "method": "bdev_xnvme_create" 00:22:21.796 }, 00:22:21.796 { 00:22:21.796 "method": "bdev_wait_for_examine" 00:22:21.796 } 00:22:21.796 ] 00:22:21.796 } 00:22:21.796 ] 00:22:21.796 } 00:22:21.796 [2024-11-20 15:34:07.450999] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:22:21.796 [2024-11-20 15:34:07.451166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73429 ] 00:22:21.796 [2024-11-20 15:34:07.639422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.796 [2024-11-20 15:34:07.750930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.363 Running I/O for 5 seconds... 
00:22:24.232 102592.00 IOPS, 400.75 MiB/s [2024-11-20T15:34:11.124Z] 102080.00 IOPS, 398.75 MiB/s [2024-11-20T15:34:12.497Z] 102272.00 IOPS, 399.50 MiB/s [2024-11-20T15:34:13.432Z] 101840.00 IOPS, 397.81 MiB/s [2024-11-20T15:34:13.432Z] 101977.60 IOPS, 398.35 MiB/s 00:22:27.474 Latency(us) 00:22:27.474 [2024-11-20T15:34:13.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.474 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:22:27.474 xnvme_bdev : 5.00 101952.33 398.25 0.00 0.00 625.10 354.99 3354.82 00:22:27.474 [2024-11-20T15:34:13.432Z] =================================================================================================================== 00:22:27.474 [2024-11-20T15:34:13.432Z] Total : 101952.33 398.25 0.00 0.00 625.10 354.99 3354.82 00:22:28.417 15:34:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:28.417 15:34:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:22:28.417 15:34:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:28.417 15:34:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:28.417 15:34:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:28.417 { 00:22:28.417 "subsystems": [ 00:22:28.417 { 00:22:28.417 "subsystem": "bdev", 00:22:28.417 "config": [ 00:22:28.417 { 00:22:28.417 "params": { 00:22:28.417 "io_mechanism": "io_uring_cmd", 00:22:28.417 "conserve_cpu": true, 00:22:28.417 "filename": "/dev/ng0n1", 00:22:28.417 "name": "xnvme_bdev" 00:22:28.417 }, 00:22:28.417 "method": "bdev_xnvme_create" 00:22:28.417 }, 00:22:28.417 { 00:22:28.417 "method": "bdev_wait_for_examine" 00:22:28.417 } 00:22:28.417 ] 00:22:28.417 } 00:22:28.417 ] 00:22:28.417 } 00:22:28.417 [2024-11-20 15:34:14.318951] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:22:28.417 [2024-11-20 15:34:14.319122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73503 ] 00:22:28.702 [2024-11-20 15:34:14.489724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.702 [2024-11-20 15:34:14.596544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.282 Running I/O for 5 seconds... 
00:22:31.152 40153.00 IOPS, 156.85 MiB/s [2024-11-20T15:34:18.043Z] 39478.50 IOPS, 154.21 MiB/s [2024-11-20T15:34:18.977Z] 39976.67 IOPS, 156.16 MiB/s [2024-11-20T15:34:20.353Z] 38107.75 IOPS, 148.86 MiB/s [2024-11-20T15:34:20.353Z] 34658.00 IOPS, 135.38 MiB/s 00:22:34.395 Latency(us) 00:22:34.395 [2024-11-20T15:34:20.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.395 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:22:34.395 xnvme_bdev : 5.02 34543.05 134.93 0.00 0.00 1844.26 60.46 50681.17 00:22:34.395 [2024-11-20T15:34:20.353Z] =================================================================================================================== 00:22:34.395 [2024-11-20T15:34:20.353Z] Total : 34543.05 134.93 0.00 0.00 1844.26 60.46 50681.17 00:22:35.333 00:22:35.333 real 0m27.616s 00:22:35.333 user 0m15.200s 00:22:35.333 sys 0m10.490s 00:22:35.333 15:34:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.333 ************************************ 00:22:35.333 END TEST xnvme_bdevperf 00:22:35.333 ************************************ 00:22:35.333 15:34:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:35.333 15:34:21 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:22:35.333 15:34:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:35.333 15:34:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.333 15:34:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:35.333 ************************************ 00:22:35.333 START TEST xnvme_fio_plugin 00:22:35.333 ************************************ 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
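The fio_bdev wrapper being traced here does two things before handing off to stock fio: it resolves which ASan runtime the SPDK fio plugin was linked against (the ldd | grep libasan | awk '{print $3}' dance that follows), then preloads that library together with build/fio/spdk_bdev so fio can see the spdk_bdev ioengine. Stripped of the detection logic, the invocation it builds up to is:

  $ LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

Both the LD_PRELOAD value and the fio arguments appear verbatim further down; the one caveat for a standalone run is that --spdk_json_conf=/dev/fd/62 only works because the harness feeds the JSON on fd 62, so outside the harness a regular file path should be substituted.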
00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:35.333 15:34:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:35.333 { 00:22:35.333 "subsystems": [ 00:22:35.333 { 00:22:35.333 "subsystem": "bdev", 00:22:35.333 "config": [ 00:22:35.333 { 00:22:35.333 "params": { 00:22:35.333 "io_mechanism": "io_uring_cmd", 00:22:35.333 "conserve_cpu": true, 00:22:35.333 "filename": "/dev/ng0n1", 00:22:35.333 "name": "xnvme_bdev" 00:22:35.333 }, 00:22:35.333 "method": "bdev_xnvme_create" 00:22:35.333 }, 00:22:35.333 { 00:22:35.333 "method": "bdev_wait_for_examine" 00:22:35.334 } 00:22:35.334 ] 00:22:35.334 } 00:22:35.334 ] 00:22:35.334 } 00:22:35.592 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:35.592 fio-3.35 00:22:35.592 Starting 1 thread 00:22:42.155 00:22:42.155 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73627: Wed Nov 20 15:34:27 2024 00:22:42.155 read: IOPS=48.8k, BW=191MiB/s (200MB/s)(953MiB/5001msec) 00:22:42.155 slat (nsec): min=2879, max=104759, avg=3919.17, stdev=963.66 00:22:42.155 clat (usec): min=849, max=3029, avg=1159.01, stdev=107.96 00:22:42.155 lat (usec): min=852, max=3134, avg=1162.93, stdev=108.13 00:22:42.155 clat percentiles (usec): 00:22:42.155 | 1.00th=[ 955], 5.00th=[ 1004], 10.00th=[ 1029], 20.00th=[ 1074], 00:22:42.155 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:22:42.155 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1287], 95.00th=[ 1319], 00:22:42.155 | 99.00th=[ 1401], 99.50th=[ 1500], 99.90th=[ 1926], 99.95th=[ 2008], 00:22:42.155 | 99.99th=[ 2802] 00:22:42.155 bw ( KiB/s): min=186880, max=203776, per=100.00%, avg=195242.67, stdev=4565.13, samples=9 00:22:42.155 iops : min=46720, max=50944, avg=48810.67, stdev=1141.28, samples=9 00:22:42.155 lat (usec) : 1000=4.56% 00:22:42.155 lat (msec) : 2=95.38%, 4=0.06% 00:22:42.155 cpu : usr=35.42%, sys=62.28%, ctx=6, majf=0, minf=762 00:22:42.155 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:42.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.155 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:22:42.155 issued rwts: total=243904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.155 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.155 00:22:42.155 Run status group 0 (all jobs): 00:22:42.155 READ: bw=191MiB/s (200MB/s), 191MiB/s-191MiB/s (200MB/s-200MB/s), io=953MiB (999MB), run=5001-5001msec 00:22:42.722 ----------------------------------------------------- 00:22:42.722 Suppressions used: 00:22:42.722 count bytes template 00:22:42.722 1 11 /usr/src/fio/parse.c 00:22:42.722 1 8 libtcmalloc_minimal.so 00:22:42.722 1 904 libcrypto.so 00:22:42.722 ----------------------------------------------------- 00:22:42.722 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:42.722 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:42.723 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:42.723 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:42.723 15:34:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:42.723 { 00:22:42.723 "subsystems": [ 00:22:42.723 { 00:22:42.723 "subsystem": "bdev", 00:22:42.723 "config": [ 00:22:42.723 { 00:22:42.723 "params": { 00:22:42.723 "io_mechanism": "io_uring_cmd", 00:22:42.723 "conserve_cpu": true, 00:22:42.723 "filename": "/dev/ng0n1", 00:22:42.723 "name": "xnvme_bdev" 00:22:42.723 }, 00:22:42.723 "method": "bdev_xnvme_create" 00:22:42.723 }, 00:22:42.723 { 00:22:42.723 "method": "bdev_wait_for_examine" 00:22:42.723 } 00:22:42.723 ] 00:22:42.723 } 00:22:42.723 ] 00:22:42.723 } 00:22:42.982 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:42.982 fio-3.35 00:22:42.982 Starting 1 thread 00:22:49.545 00:22:49.545 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73718: Wed Nov 20 15:34:34 2024 00:22:49.545 write: IOPS=45.9k, BW=179MiB/s (188MB/s)(897MiB/5001msec); 0 zone resets 00:22:49.545 slat (usec): min=2, max=279, avg= 4.68, stdev= 3.52 00:22:49.545 clat (usec): min=91, max=10583, avg=1219.67, stdev=313.29 00:22:49.545 lat (usec): min=96, max=10587, avg=1224.35, stdev=313.80 00:22:49.545 clat percentiles (usec): 00:22:49.545 | 1.00th=[ 742], 5.00th=[ 963], 10.00th=[ 1004], 20.00th=[ 1057], 00:22:49.545 | 30.00th=[ 1090], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205], 00:22:49.545 | 70.00th=[ 1254], 80.00th=[ 1303], 90.00th=[ 1467], 95.00th=[ 1713], 00:22:49.545 | 99.00th=[ 2245], 99.50th=[ 2671], 99.90th=[ 4146], 99.95th=[ 5473], 00:22:49.545 | 99.99th=[ 9110] 00:22:49.545 bw ( KiB/s): min=159904, max=195584, per=100.00%, avg=185414.22, stdev=12937.48, samples=9 00:22:49.545 iops : min=39976, max=48896, avg=46353.56, stdev=3234.37, samples=9 00:22:49.545 lat (usec) : 100=0.01%, 250=0.05%, 500=0.22%, 750=0.76%, 1000=8.13% 00:22:49.545 lat (msec) : 2=88.97%, 4=1.76%, 10=0.11%, 20=0.01% 00:22:49.545 cpu : usr=38.70%, sys=56.26%, ctx=13, majf=0, minf=763 00:22:49.545 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.6%, 16=24.0%, 32=52.5%, >=64=1.8% 00:22:49.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.545 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:22:49.545 issued rwts: total=0,229537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.545 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.545 00:22:49.545 Run status group 0 (all jobs): 00:22:49.545 WRITE: bw=179MiB/s (188MB/s), 179MiB/s-179MiB/s (188MB/s-188MB/s), io=897MiB (940MB), run=5001-5001msec 00:22:50.157 ----------------------------------------------------- 00:22:50.157 Suppressions used: 00:22:50.157 count bytes template 00:22:50.157 1 11 /usr/src/fio/parse.c 00:22:50.157 1 8 libtcmalloc_minimal.so 00:22:50.157 1 904 libcrypto.so 00:22:50.157 ----------------------------------------------------- 00:22:50.157 00:22:50.157 00:22:50.157 real 0m14.954s 00:22:50.157 user 0m7.607s 00:22:50.157 sys 0m6.688s 00:22:50.157 ************************************ 00:22:50.157 END TEST xnvme_fio_plugin 00:22:50.157 ************************************ 00:22:50.157 15:34:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.157 15:34:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:50.432 15:34:36 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73187 00:22:50.432 15:34:36 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73187 ']' 00:22:50.432 15:34:36 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73187 
00:22:50.432 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73187) - No such process 00:22:50.432 Process with pid 73187 is not found 00:22:50.432 15:34:36 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73187 is not found' 00:22:50.432 00:22:50.432 real 3m55.252s 00:22:50.432 user 2m3.299s 00:22:50.432 sys 1m34.615s 00:22:50.432 15:34:36 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.432 ************************************ 00:22:50.432 END TEST nvme_xnvme 00:22:50.432 ************************************ 00:22:50.432 15:34:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:50.432 15:34:36 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:50.432 15:34:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:50.432 15:34:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.432 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:22:50.432 ************************************ 00:22:50.432 START TEST blockdev_xnvme 00:22:50.432 ************************************ 00:22:50.432 15:34:36 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:50.432 * Looking for test storage... 00:22:50.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:50.432 15:34:36 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:50.432 15:34:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:22:50.432 15:34:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:50.432 15:34:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.432 15:34:36 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:22:50.691 15:34:36 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.691 15:34:36 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.691 15:34:36 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.691 15:34:36 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:22:50.691 15:34:36 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.691 15:34:36 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:50.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.691 --rc genhtml_branch_coverage=1 00:22:50.691 --rc genhtml_function_coverage=1 00:22:50.691 --rc genhtml_legend=1 00:22:50.691 --rc geninfo_all_blocks=1 00:22:50.691 --rc geninfo_unexecuted_blocks=1 00:22:50.691 00:22:50.691 ' 00:22:50.691 15:34:36 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:50.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.691 --rc genhtml_branch_coverage=1 00:22:50.691 --rc genhtml_function_coverage=1 00:22:50.691 --rc genhtml_legend=1 00:22:50.691 --rc geninfo_all_blocks=1 00:22:50.691 --rc geninfo_unexecuted_blocks=1 00:22:50.691 00:22:50.691 ' 00:22:50.691 15:34:36 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:50.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.691 --rc genhtml_branch_coverage=1 00:22:50.691 --rc genhtml_function_coverage=1 00:22:50.691 --rc genhtml_legend=1 00:22:50.691 --rc geninfo_all_blocks=1 00:22:50.691 --rc geninfo_unexecuted_blocks=1 00:22:50.691 00:22:50.691 ' 00:22:50.691 15:34:36 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:50.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.691 --rc genhtml_branch_coverage=1 00:22:50.691 --rc genhtml_function_coverage=1 00:22:50.691 --rc genhtml_legend=1 00:22:50.691 --rc geninfo_all_blocks=1 00:22:50.691 --rc geninfo_unexecuted_blocks=1 00:22:50.691 00:22:50.691 ' 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73858 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73858 00:22:50.691 15:34:36 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:50.691 15:34:36 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73858 ']' 00:22:50.691 15:34:36 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.691 15:34:36 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.691 15:34:36 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.691 15:34:36 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.691 15:34:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:50.691 [2024-11-20 15:34:36.500228] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:22:50.691 [2024-11-20 15:34:36.500586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73858 ] 00:22:50.950 [2024-11-20 15:34:36.670551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.950 [2024-11-20 15:34:36.782789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.885 15:34:37 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.885 15:34:37 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:22:51.885 15:34:37 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:22:51.885 15:34:37 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:22:51.885 15:34:37 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:22:51.885 15:34:37 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:22:51.885 15:34:37 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:52.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:53.018 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:53.018 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:53.018 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:22:53.018 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:22:53.018 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:53.018 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:22:53.019 nvme0n1 00:22:53.019 nvme0n2 00:22:53.019 nvme0n3 00:22:53.019 nvme1n1 00:22:53.019 nvme2n1 00:22:53.019 nvme3n1 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.019 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:53.019 15:34:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.279 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:22:53.279 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:22:53.279 15:34:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:22:53.279 15:34:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.279 15:34:38 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:22:53.279 15:34:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.279 15:34:39 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:22:53.279 15:34:39 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:22:53.279 15:34:39 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "dceb5899-a17f-4f8e-a3d7-f28be5079734"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dceb5899-a17f-4f8e-a3d7-f28be5079734",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "8c58511f-1326-4d63-b8c7-3ab3737dbdf8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8c58511f-1326-4d63-b8c7-3ab3737dbdf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "bf5e8627-fc6d-4beb-85b9-33536b98a949"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bf5e8627-fc6d-4beb-85b9-33536b98a949",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3244ba25-7ae5-4b6b-a32d-00f399a07b19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3244ba25-7ae5-4b6b-a32d-00f399a07b19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "43af9031-3533-4d23-acf7-4de5aa66f3c5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "43af9031-3533-4d23-acf7-4de5aa66f3c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b119da6a-bc65-453e-9d2f-40f3adebcfc1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b119da6a-bc65-453e-9d2f-40f3adebcfc1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:22:53.279 15:34:39 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:22:53.279 15:34:39 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:22:53.279 15:34:39 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:22:53.279 15:34:39 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73858 00:22:53.279 15:34:39 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73858 ']' 00:22:53.279 15:34:39 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73858 00:22:53.279 15:34:39 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:22:53.279 15:34:39 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.279 15:34:39 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73858 00:22:53.279 killing process with pid 73858 00:22:53.279 15:34:39 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:53.279 15:34:39 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:53.279 15:34:39 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73858' 00:22:53.279 15:34:39 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73858 00:22:53.279 
15:34:39 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73858 00:22:55.809 15:34:41 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:55.809 15:34:41 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:55.809 15:34:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:55.809 15:34:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.809 15:34:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:55.809 ************************************ 00:22:55.809 START TEST bdev_hello_world 00:22:55.809 ************************************ 00:22:55.809 15:34:41 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:55.809 [2024-11-20 15:34:41.597424] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:22:55.809 [2024-11-20 15:34:41.597619] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74153 ] 00:22:56.067 [2024-11-20 15:34:41.781958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.067 [2024-11-20 15:34:41.889089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.637 [2024-11-20 15:34:42.316044] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:56.637 [2024-11-20 15:34:42.316304] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:22:56.637 [2024-11-20 15:34:42.316335] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:56.637 [2024-11-20 15:34:42.318449] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:56.637 [2024-11-20 15:34:42.318813] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:56.637 [2024-11-20 15:34:42.318837] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:56.637 [2024-11-20 15:34:42.319180] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
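The name list that selected nvme0n1 as the hello-world target was built just above (blockdev.sh@748-751) with a mapfile/jq pair: the JSON objects from bdev_get_bdevs are printed one per line and jq strips out each .name. A minimal sketch of that pattern, assuming rpc.py is on PATH and the app answers on its default RPC socket (the exact jq filter inside blockdev.sh may differ):

    # Collect every registered bdev name into a bash array, one element per bdev.
    mapfile -t bdevs_name < <(rpc.py bdev_get_bdevs | jq -r '.[].name')
    bdev_list=("${bdevs_name[@]}")     # copied verbatim, as blockdev.sh@749 does
    hello_world_bdev=${bdev_list[0]}   # nvme0n1 in this run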
00:22:56.637 00:22:56.637 [2024-11-20 15:34:42.319206] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:57.573 ************************************ 00:22:57.573 END TEST bdev_hello_world 00:22:57.573 ************************************ 00:22:57.573 00:22:57.573 real 0m1.934s 00:22:57.573 user 0m1.557s 00:22:57.573 sys 0m0.260s 00:22:57.573 15:34:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:57.573 15:34:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:57.573 15:34:43 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:22:57.573 15:34:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:57.573 15:34:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:57.573 15:34:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:57.573 ************************************ 00:22:57.573 START TEST bdev_bounds 00:22:57.573 ************************************ 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:22:57.573 Process bdevio pid: 74191 00:22:57.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74191 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74191' 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74191 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74191 ']' 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.573 15:34:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:57.832 [2024-11-20 15:34:43.595898] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
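The bdev_bounds stage starting here launches bdevio with -w, which (as the perform_tests call below suggests) keeps the app idle after bdev registration until tests.py triggers the suites over RPC; waitforlisten merely polls the RPC socket until the new process answers. A condensed sketch of the driver, with paths taken from this log and error handling omitted:

    testdir=/home/vagrant/spdk_repo/spdk/test/bdev
    "$testdir/bdevio/bdevio" -w -s 0 --json "$testdir/bdev.json" '' &
    bdevio_pid=$!
    trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevio_pid"                # poll /var/tmp/spdk.sock until RPC responds
    "$testdir/bdevio/tests.py" perform_tests   # releases the -w wait and runs every suite
    killprocess "$bdevio_pid"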
00:22:57.832 [2024-11-20 15:34:43.596347] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74191 ] 00:22:57.832 [2024-11-20 15:34:43.787627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:58.091 [2024-11-20 15:34:43.909237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.091 [2024-11-20 15:34:43.909300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.091 [2024-11-20 15:34:43.909324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.659 15:34:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.659 15:34:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:22:58.659 15:34:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:58.659 I/O targets: 00:22:58.659 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:58.659 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:58.659 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:58.659 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:22:58.659 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:22:58.659 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:22:58.659 00:22:58.659 00:22:58.659 CUnit - A unit testing framework for C - Version 2.1-3 00:22:58.659 http://cunit.sourceforge.net/ 00:22:58.659 00:22:58.659 00:22:58.659 Suite: bdevio tests on: nvme3n1 00:22:58.659 Test: blockdev write read block ...passed 00:22:58.659 Test: blockdev write zeroes read block ...passed 00:22:58.659 Test: blockdev write zeroes read no split ...passed 00:22:58.918 Test: blockdev write zeroes read split ...passed 00:22:58.918 Test: blockdev write zeroes read split partial ...passed 00:22:58.918 Test: blockdev reset ...passed 00:22:58.918 Test: blockdev write read 8 blocks ...passed 00:22:58.918 Test: blockdev write read size > 128k ...passed 00:22:58.918 Test: blockdev write read invalid size ...passed 00:22:58.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:58.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:58.918 Test: blockdev write read max offset ...passed 00:22:58.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:58.918 Test: blockdev writev readv 8 blocks ...passed 00:22:58.918 Test: blockdev writev readv 30 x 1block ...passed 00:22:58.918 Test: blockdev writev readv block ...passed 00:22:58.918 Test: blockdev writev readv size > 128k ...passed 00:22:58.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:58.918 Test: blockdev comparev and writev ...passed 00:22:58.918 Test: blockdev nvme passthru rw ...passed 00:22:58.918 Test: blockdev nvme passthru vendor specific ...passed 00:22:58.918 Test: blockdev nvme admin passthru ...passed 00:22:58.918 Test: blockdev copy ...passed 00:22:58.918 Suite: bdevio tests on: nvme2n1 00:22:58.918 Test: blockdev write read block ...passed 00:22:58.918 Test: blockdev write zeroes read block ...passed 00:22:58.918 Test: blockdev write zeroes read no split ...passed 00:22:58.918 Test: blockdev write zeroes read split ...passed 00:22:58.918 Test: blockdev write zeroes read split partial ...passed 00:22:58.918 Test: blockdev reset ...passed 
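Every "Suite: bdevio tests on: <bdev>" block in this run executes the same fixed battery of 23 cases against one bdev; what each case can actually exercise is bounded by the supported_io_types map dumped earlier for these xNVMe bdevs (read, write and write_zeroes true; unmap, flush, reset, compare, copy and the NVMe passthru paths false). That map can be queried on its own; a one-liner sketch, assuming the default RPC socket:

    # Show the I/O capability flags bdevio has to work within for one bdev.
    rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].supported_io_types'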
00:22:58.918 Test: blockdev write read 8 blocks ...passed 00:22:58.918 Test: blockdev write read size > 128k ...passed 00:22:58.918 Test: blockdev write read invalid size ...passed 00:22:58.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:58.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:58.918 Test: blockdev write read max offset ...passed 00:22:58.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:58.918 Test: blockdev writev readv 8 blocks ...passed 00:22:58.918 Test: blockdev writev readv 30 x 1block ...passed 00:22:58.918 Test: blockdev writev readv block ...passed 00:22:58.918 Test: blockdev writev readv size > 128k ...passed 00:22:58.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:58.918 Test: blockdev comparev and writev ...passed 00:22:58.918 Test: blockdev nvme passthru rw ...passed 00:22:58.918 Test: blockdev nvme passthru vendor specific ...passed 00:22:58.918 Test: blockdev nvme admin passthru ...passed 00:22:58.918 Test: blockdev copy ...passed 00:22:58.918 Suite: bdevio tests on: nvme1n1 00:22:58.918 Test: blockdev write read block ...passed 00:22:58.918 Test: blockdev write zeroes read block ...passed 00:22:58.918 Test: blockdev write zeroes read no split ...passed 00:22:58.918 Test: blockdev write zeroes read split ...passed 00:22:58.918 Test: blockdev write zeroes read split partial ...passed 00:22:58.918 Test: blockdev reset ...passed 00:22:58.918 Test: blockdev write read 8 blocks ...passed 00:22:58.918 Test: blockdev write read size > 128k ...passed 00:22:58.918 Test: blockdev write read invalid size ...passed 00:22:58.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:58.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:58.919 Test: blockdev write read max offset ...passed 00:22:58.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:58.919 Test: blockdev writev readv 8 blocks ...passed 00:22:58.919 Test: blockdev writev readv 30 x 1block ...passed 00:22:58.919 Test: blockdev writev readv block ...passed 00:22:58.919 Test: blockdev writev readv size > 128k ...passed 00:22:58.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:58.919 Test: blockdev comparev and writev ...passed 00:22:58.919 Test: blockdev nvme passthru rw ...passed 00:22:58.919 Test: blockdev nvme passthru vendor specific ...passed 00:22:58.919 Test: blockdev nvme admin passthru ...passed 00:22:58.919 Test: blockdev copy ...passed 00:22:58.919 Suite: bdevio tests on: nvme0n3 00:22:58.919 Test: blockdev write read block ...passed 00:22:58.919 Test: blockdev write zeroes read block ...passed 00:22:58.919 Test: blockdev write zeroes read no split ...passed 00:22:58.919 Test: blockdev write zeroes read split ...passed 00:22:59.178 Test: blockdev write zeroes read split partial ...passed 00:22:59.178 Test: blockdev reset ...passed 00:22:59.178 Test: blockdev write read 8 blocks ...passed 00:22:59.178 Test: blockdev write read size > 128k ...passed 00:22:59.178 Test: blockdev write read invalid size ...passed 00:22:59.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:59.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:59.178 Test: blockdev write read max offset ...passed 00:22:59.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:59.178 Test: blockdev writev readv 8 blocks 
...passed 00:22:59.178 Test: blockdev writev readv 30 x 1block ...passed 00:22:59.178 Test: blockdev writev readv block ...passed 00:22:59.178 Test: blockdev writev readv size > 128k ...passed 00:22:59.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:59.178 Test: blockdev comparev and writev ...passed 00:22:59.178 Test: blockdev nvme passthru rw ...passed 00:22:59.178 Test: blockdev nvme passthru vendor specific ...passed 00:22:59.178 Test: blockdev nvme admin passthru ...passed 00:22:59.178 Test: blockdev copy ...passed 00:22:59.178 Suite: bdevio tests on: nvme0n2 00:22:59.178 Test: blockdev write read block ...passed 00:22:59.178 Test: blockdev write zeroes read block ...passed 00:22:59.178 Test: blockdev write zeroes read no split ...passed 00:22:59.178 Test: blockdev write zeroes read split ...passed 00:22:59.178 Test: blockdev write zeroes read split partial ...passed 00:22:59.178 Test: blockdev reset ...passed 00:22:59.178 Test: blockdev write read 8 blocks ...passed 00:22:59.178 Test: blockdev write read size > 128k ...passed 00:22:59.178 Test: blockdev write read invalid size ...passed 00:22:59.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:59.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:59.178 Test: blockdev write read max offset ...passed 00:22:59.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:59.178 Test: blockdev writev readv 8 blocks ...passed 00:22:59.178 Test: blockdev writev readv 30 x 1block ...passed 00:22:59.178 Test: blockdev writev readv block ...passed 00:22:59.178 Test: blockdev writev readv size > 128k ...passed 00:22:59.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:59.178 Test: blockdev comparev and writev ...passed 00:22:59.178 Test: blockdev nvme passthru rw ...passed 00:22:59.178 Test: blockdev nvme passthru vendor specific ...passed 00:22:59.178 Test: blockdev nvme admin passthru ...passed 00:22:59.178 Test: blockdev copy ...passed 00:22:59.178 Suite: bdevio tests on: nvme0n1 00:22:59.178 Test: blockdev write read block ...passed 00:22:59.178 Test: blockdev write zeroes read block ...passed 00:22:59.178 Test: blockdev write zeroes read no split ...passed 00:22:59.178 Test: blockdev write zeroes read split ...passed 00:22:59.178 Test: blockdev write zeroes read split partial ...passed 00:22:59.178 Test: blockdev reset ...passed 00:22:59.178 Test: blockdev write read 8 blocks ...passed 00:22:59.178 Test: blockdev write read size > 128k ...passed 00:22:59.178 Test: blockdev write read invalid size ...passed 00:22:59.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:59.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:59.178 Test: blockdev write read max offset ...passed 00:22:59.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:59.178 Test: blockdev writev readv 8 blocks ...passed 00:22:59.178 Test: blockdev writev readv 30 x 1block ...passed 00:22:59.178 Test: blockdev writev readv block ...passed 00:22:59.178 Test: blockdev writev readv size > 128k ...passed 00:22:59.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:59.178 Test: blockdev comparev and writev ...passed 00:22:59.178 Test: blockdev nvme passthru rw ...passed 00:22:59.178 Test: blockdev nvme passthru vendor specific ...passed 00:22:59.178 Test: blockdev nvme admin passthru ...passed 00:22:59.178 Test: blockdev copy ...passed 
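The Run Summary that follows is internally consistent with the transcript: 6 suites times 23 "Test:" lines each gives the 138 tests it reports as run and passed, while the 780 assertions were made inside the test bodies and so need not divide evenly per test. A quick way to re-derive the counts from a saved transcript (sketch; the log file name is assumed):

    grep -o 'Suite: bdevio tests on' bdevio.log | wc -l   # expect 6
    grep -o 'Test: blockdev' bdevio.log | wc -l           # expect 138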
00:22:59.178 00:22:59.178 Run Summary: Type Total Ran Passed Failed Inactive 00:22:59.178 suites 6 6 n/a 0 0 00:22:59.178 tests 138 138 138 0 0 00:22:59.178 asserts 780 780 780 0 n/a 00:22:59.178 00:22:59.178 Elapsed time = 1.344 seconds 00:22:59.178 0 00:22:59.178 15:34:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74191 00:22:59.178 15:34:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74191 ']' 00:22:59.178 15:34:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74191 00:22:59.178 15:34:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:22:59.178 15:34:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.178 15:34:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74191 00:22:59.178 15:34:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:59.178 killing process with pid 74191 00:22:59.178 15:34:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:59.178 15:34:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74191' 00:22:59.178 15:34:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74191 00:22:59.178 15:34:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74191 00:23:00.556 ************************************ 00:23:00.556 END TEST bdev_bounds 00:23:00.556 ************************************ 00:23:00.556 15:34:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:00.556 00:23:00.556 real 0m2.795s 00:23:00.556 user 0m6.919s 00:23:00.556 sys 0m0.426s 00:23:00.556 15:34:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.556 15:34:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:00.556 15:34:46 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:23:00.556 15:34:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:00.556 15:34:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.556 15:34:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:00.556 ************************************ 00:23:00.556 START TEST bdev_nbd 00:23:00.556 ************************************ 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
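The bdev_nbd stage that starts here pushes the same six bdevs through the kernel NBD layer: each bdev is exported as a /dev/nbdX block device over the dedicated /var/tmp/spdk-nbd.sock RPC socket, sanity-read with a single direct-I/O dd, then detached. The per-device cycle, condensed from the traces that follow (error handling omitted; the scratch file path is illustrative):

    sock=/var/tmp/spdk-nbd.sock
    rpc.py -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0    # export the bdev as a kernel block device
    grep -q -w nbd0 /proc/partitions                      # wait until the kernel registers it
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one 4 KiB direct read must succeed
    rpc.py -s "$sock" nbd_stop_disk /dev/nbd0             # detach again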
00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74256 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:00.556 15:34:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74256 /var/tmp/spdk-nbd.sock 00:23:00.557 15:34:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74256 ']' 00:23:00.557 15:34:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:00.557 15:34:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.557 15:34:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:00.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:00.557 15:34:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.557 15:34:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:00.557 [2024-11-20 15:34:46.453197] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
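waitfornbd (and its teardown twin waitfornbd_exit, used later when the disks are stopped) is a bounded poll over /proc/partitions followed by a read check. The version below is reconstructed from the autotest_common.sh traces in this stage; the sleep between retries is assumed, since it is not visible in this excerpt:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do                 # matches the (( i <= 20 )) loops traced below
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                   # assumed back-off, not shown in the trace
        done
        # Prove the device is readable: one direct 4 KiB read that must produce a non-empty file.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ] || return 1
        rm -f /tmp/nbdtest
    }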
00:23:00.557 [2024-11-20 15:34:46.453629] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.815 [2024-11-20 15:34:46.653586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.815 [2024-11-20 15:34:46.764078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:01.751 
1+0 records in 00:23:01.751 1+0 records out 00:23:01.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589343 s, 7.0 MB/s 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:01.751 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:02.010 1+0 records in 00:23:02.010 1+0 records out 00:23:02.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478181 s, 8.6 MB/s 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:02.010 15:34:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:23:02.268 15:34:48 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:02.268 1+0 records in 00:23:02.268 1+0 records out 00:23:02.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487313 s, 8.4 MB/s 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:02.268 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:02.527 1+0 records in 00:23:02.527 1+0 records out 00:23:02.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634946 s, 6.5 MB/s 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:02.527 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:02.786 1+0 records in 00:23:02.786 1+0 records out 00:23:02.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520137 s, 7.9 MB/s 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:02.786 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:23:03.045 15:34:48 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:03.045 1+0 records in 00:23:03.045 1+0 records out 00:23:03.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000836017 s, 4.9 MB/s 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:03.045 15:34:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:03.304 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd0", 00:23:03.304 "bdev_name": "nvme0n1" 00:23:03.304 }, 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd1", 00:23:03.304 "bdev_name": "nvme0n2" 00:23:03.304 }, 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd2", 00:23:03.304 "bdev_name": "nvme0n3" 00:23:03.304 }, 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd3", 00:23:03.304 "bdev_name": "nvme1n1" 00:23:03.304 }, 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd4", 00:23:03.304 "bdev_name": "nvme2n1" 00:23:03.304 }, 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd5", 00:23:03.304 "bdev_name": "nvme3n1" 00:23:03.304 } 00:23:03.304 ]' 00:23:03.304 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:03.304 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd0", 00:23:03.304 "bdev_name": "nvme0n1" 00:23:03.304 }, 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd1", 00:23:03.304 "bdev_name": "nvme0n2" 00:23:03.304 }, 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd2", 00:23:03.304 "bdev_name": "nvme0n3" 00:23:03.304 }, 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd3", 00:23:03.304 "bdev_name": "nvme1n1" 00:23:03.304 }, 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd4", 00:23:03.304 "bdev_name": "nvme2n1" 00:23:03.304 }, 00:23:03.304 { 00:23:03.304 "nbd_device": "/dev/nbd5", 00:23:03.304 "bdev_name": "nvme3n1" 00:23:03.304 } 00:23:03.304 ]' 00:23:03.304 15:34:49 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:03.304 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:23:03.304 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:03.304 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:23:03.304 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:03.304 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:03.304 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:03.304 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:03.562 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:03.562 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:03.562 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:03.562 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:03.562 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:03.562 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:03.562 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:03.562 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:03.562 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:03.563 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:03.819 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:03.819 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:03.819 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:03.819 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:03.820 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:03.820 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:03.820 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:03.820 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:03.820 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:03.820 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:23:04.078 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:23:04.078 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:23:04.078 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:23:04.078 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:04.078 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:04.078 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:23:04.078 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:04.078 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:04.078 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:04.078 15:34:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:23:04.078 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:23:04.078 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:23:04.078 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:23:04.078 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:04.078 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:04.078 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:23:04.078 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:04.078 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:04.078 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:04.078 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:23:04.336 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:04.595 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:04.853 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:04.853 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:04.853 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:05.112 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:05.112 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:05.112 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:05.112 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:05.112 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:05.113 15:34:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:23:05.372 /dev/nbd0 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.372 1+0 records in 00:23:05.372 1+0 records out 00:23:05.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406432 s, 10.1 MB/s 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:05.372 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:23:05.631 /dev/nbd1 00:23:05.631 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:05.631 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:05.631 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:05.631 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.631 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.632 1+0 records in 00:23:05.632 1+0 records out 00:23:05.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627555 s, 6.5 MB/s 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.632 15:34:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:23:05.632 /dev/nbd10 00:23:05.632 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:23:05.890 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:23:05.890 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:23:05.890 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.890 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.890 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.890 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:23:05.890 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.890 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.890 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.890 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.890 1+0 records in 00:23:05.890 1+0 records out 00:23:05.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693497 s, 5.9 MB/s 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:23:05.891 /dev/nbd11 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.891 15:34:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.891 1+0 records in 00:23:05.891 1+0 records out 00:23:05.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000873365 s, 4.7 MB/s 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:05.891 15:34:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:23:06.150 /dev/nbd12 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.150 1+0 records in 00:23:06.150 1+0 records out 00:23:06.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769615 s, 5.3 MB/s 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:06.150 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:23:06.409 /dev/nbd13 00:23:06.409 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:23:06.409 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:23:06.409 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:23:06.409 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:06.409 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:06.409 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:06.409 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:23:06.409 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:06.409 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:06.409 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:06.409 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.409 1+0 records in 00:23:06.409 1+0 records out 00:23:06.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804266 s, 5.1 MB/s 00:23:06.669 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.669 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:06.669 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.669 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:06.669 15:34:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:06.669 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.669 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:06.669 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:06.669 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:06.669 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd0", 00:23:06.930 "bdev_name": "nvme0n1" 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd1", 00:23:06.930 "bdev_name": "nvme0n2" 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd10", 00:23:06.930 "bdev_name": "nvme0n3" 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd11", 00:23:06.930 "bdev_name": "nvme1n1" 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd12", 00:23:06.930 "bdev_name": "nvme2n1" 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd13", 00:23:06.930 "bdev_name": "nvme3n1" 00:23:06.930 } 00:23:06.930 ]' 00:23:06.930 15:34:52 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd0", 00:23:06.930 "bdev_name": "nvme0n1" 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd1", 00:23:06.930 "bdev_name": "nvme0n2" 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd10", 00:23:06.930 "bdev_name": "nvme0n3" 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd11", 00:23:06.930 "bdev_name": "nvme1n1" 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd12", 00:23:06.930 "bdev_name": "nvme2n1" 00:23:06.930 }, 00:23:06.930 { 00:23:06.930 "nbd_device": "/dev/nbd13", 00:23:06.930 "bdev_name": "nvme3n1" 00:23:06.930 } 00:23:06.930 ]' 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:06.930 /dev/nbd1 00:23:06.930 /dev/nbd10 00:23:06.930 /dev/nbd11 00:23:06.930 /dev/nbd12 00:23:06.930 /dev/nbd13' 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:06.930 /dev/nbd1 00:23:06.930 /dev/nbd10 00:23:06.930 /dev/nbd11 00:23:06.930 /dev/nbd12 00:23:06.930 /dev/nbd13' 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:06.930 256+0 records in 00:23:06.930 256+0 records out 00:23:06.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00652935 s, 161 MB/s 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:06.930 256+0 records in 00:23:06.930 256+0 records out 00:23:06.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121296 s, 8.6 MB/s 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:06.930 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:07.190 256+0 records in 00:23:07.190 256+0 records out 00:23:07.190 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.124871 s, 8.4 MB/s 00:23:07.190 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:07.190 15:34:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:23:07.190 256+0 records in 00:23:07.190 256+0 records out 00:23:07.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124916 s, 8.4 MB/s 00:23:07.190 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:07.190 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:23:07.449 256+0 records in 00:23:07.449 256+0 records out 00:23:07.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122531 s, 8.6 MB/s 00:23:07.449 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:07.449 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:23:07.449 256+0 records in 00:23:07.449 256+0 records out 00:23:07.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125617 s, 8.3 MB/s 00:23:07.449 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:07.449 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:23:07.710 256+0 records in 00:23:07.710 256+0 records out 00:23:07.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143748 s, 7.3 MB/s 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.710 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:07.969 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:07.969 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:07.969 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:07.969 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.969 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.969 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:07.969 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:07.969 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.969 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.969 15:34:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:08.228 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:08.228 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:08.228 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:08.228 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.228 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.228 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:08.228 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.228 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.228 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.228 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:23:08.487 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:23:08.487 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:23:08.487 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:23:08.487 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.487 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.487 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:23:08.487 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.487 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.487 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.487 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.747 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:23:09.316 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:23:09.316 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:23:09.316 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:23:09.316 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.316 15:34:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.316 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:23:09.316 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:09.316 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.316 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:09.316 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.316 15:34:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:09.316 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:09.575 malloc_lvol_verify 00:23:09.575 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:09.834 d1d6b8b5-f4ba-4e48-a67f-6998cb55597f 00:23:10.092 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:10.092 459dd432-5ae6-48d7-82f3-718b94d9cfc1 00:23:10.092 15:34:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:10.351 /dev/nbd0 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
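For reference, the lvol round-trip traced here reduces to a short RPC sequence. This is a minimal sketch, assuming an SPDK target is already serving /var/tmp/spdk-nbd.sock; the names and sizes mirror the trace (16 MiB malloc bdev with 512-byte blocks, 4 MiB lvol):

    #!/usr/bin/env bash
    set -euo pipefail
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB RAM-backed bdev
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # export it as /dev/nbd0
    mkfs.ext4 /dev/nbd0         # filesystem creation doubles as an I/O smoke test
    $RPC nbd_stop_disk /dev/nbd0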
00:23:10.351 mke2fs 1.47.0 (5-Feb-2023) 00:23:10.351 Discarding device blocks: 0/4096 done 00:23:10.351 Creating filesystem with 4096 1k blocks and 1024 inodes 00:23:10.351 00:23:10.351 Allocating group tables: 0/1 done 00:23:10.351 Writing inode tables: 0/1 done 00:23:10.351 Creating journal (1024 blocks): done 00:23:10.351 Writing superblocks and filesystem accounting information: 0/1 done 00:23:10.351 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:10.351 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74256 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74256 ']' 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74256 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74256 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:10.611 killing process with pid 74256 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74256' 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74256 00:23:10.611 15:34:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74256 00:23:11.992 15:34:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:11.992 00:23:11.992 real 0m11.328s 00:23:11.992 user 0m14.850s 00:23:11.992 sys 0m4.635s 00:23:11.992 ************************************ 00:23:11.992 END TEST bdev_nbd 00:23:11.992 ************************************ 00:23:11.992 15:34:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.992 
15:34:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:11.992 15:34:57 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:23:11.992 15:34:57 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:23:11.992 15:34:57 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:23:11.992 15:34:57 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:23:11.992 15:34:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:11.992 15:34:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.992 15:34:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:11.992 ************************************ 00:23:11.992 START TEST bdev_fio 00:23:11.992 ************************************ 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:11.992 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:23:11.992 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:11.993 ************************************ 00:23:11.993 START TEST bdev_fio_rw_verify 00:23:11.993 ************************************ 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:11.993 15:34:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:12.253 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.253 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.253 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.253 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.253 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.253 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.253 fio-3.35 00:23:12.253 Starting 6 threads 00:23:24.490 00:23:24.490 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74665: Wed Nov 20 15:35:08 2024 00:23:24.490 read: IOPS=31.3k, BW=122MiB/s (128MB/s)(1221MiB/10001msec) 00:23:24.490 slat (usec): min=2, max=292, avg= 6.44, stdev= 3.71 00:23:24.490 clat (usec): min=123, max=4778, avg=608.79, 
stdev=194.26 00:23:24.490 lat (usec): min=129, max=4781, avg=615.24, stdev=195.06 00:23:24.490 clat percentiles (usec): 00:23:24.490 | 50.000th=[ 644], 99.000th=[ 1074], 99.900th=[ 1516], 99.990th=[ 3818], 00:23:24.490 | 99.999th=[ 4424] 00:23:24.490 write: IOPS=31.6k, BW=123MiB/s (129MB/s)(1234MiB/10001msec); 0 zone resets 00:23:24.490 slat (usec): min=7, max=1503, avg=22.38, stdev=23.82 00:23:24.490 clat (usec): min=94, max=9497, avg=689.13, stdev=225.40 00:23:24.490 lat (usec): min=112, max=9530, avg=711.51, stdev=227.67 00:23:24.490 clat percentiles (usec): 00:23:24.490 | 50.000th=[ 701], 99.000th=[ 1319], 99.900th=[ 2474], 99.990th=[ 4948], 00:23:24.490 | 99.999th=[ 7373] 00:23:24.490 bw ( KiB/s): min=99752, max=146401, per=100.00%, avg=126679.63, stdev=2418.89, samples=114 00:23:24.490 iops : min=24938, max=36600, avg=31669.63, stdev=604.73, samples=114 00:23:24.490 lat (usec) : 100=0.01%, 250=2.88%, 500=18.29%, 750=52.34%, 1000=23.19% 00:23:24.490 lat (msec) : 2=3.20%, 4=0.09%, 10=0.01% 00:23:24.490 cpu : usr=61.23%, sys=26.11%, ctx=7326, majf=0, minf=26248 00:23:24.490 IO depths : 1=11.9%, 2=24.3%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:24.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.490 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.490 issued rwts: total=312613,315960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:24.490 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:24.490 00:23:24.490 Run status group 0 (all jobs): 00:23:24.491 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=1221MiB (1280MB), run=10001-10001msec 00:23:24.491 WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=1234MiB (1294MB), run=10001-10001msec 00:23:24.491 ----------------------------------------------------- 00:23:24.491 Suppressions used: 00:23:24.491 count bytes template 00:23:24.491 6 48 /usr/src/fio/parse.c 00:23:24.491 3138 301248 /usr/src/fio/iolog.c 00:23:24.491 1 8 libtcmalloc_minimal.so 00:23:24.491 1 904 libcrypto.so 00:23:24.491 ----------------------------------------------------- 00:23:24.491 00:23:24.491 00:23:24.491 real 0m12.542s 00:23:24.491 user 0m38.790s 00:23:24.491 sys 0m16.059s 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:24.491 ************************************ 00:23:24.491 END TEST bdev_fio_rw_verify 00:23:24.491 ************************************ 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:24.491 15:35:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "dceb5899-a17f-4f8e-a3d7-f28be5079734"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dceb5899-a17f-4f8e-a3d7-f28be5079734",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "8c58511f-1326-4d63-b8c7-3ab3737dbdf8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8c58511f-1326-4d63-b8c7-3ab3737dbdf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "bf5e8627-fc6d-4beb-85b9-33536b98a949"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bf5e8627-fc6d-4beb-85b9-33536b98a949",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3244ba25-7ae5-4b6b-a32d-00f399a07b19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3244ba25-7ae5-4b6b-a32d-00f399a07b19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "43af9031-3533-4d23-acf7-4de5aa66f3c5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "43af9031-3533-4d23-acf7-4de5aa66f3c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b119da6a-bc65-453e-9d2f-40f3adebcfc1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b119da6a-bc65-453e-9d2f-40f3adebcfc1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:23:24.750 15:35:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:24.750 15:35:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:24.750 15:35:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:24.750 /home/vagrant/spdk_repo/spdk 00:23:24.750 15:35:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:23:24.750 15:35:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
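Stripped of the wrapper functions, the fio stage above comes down to running stock fio with SPDK's external spdk_bdev ioengine preloaded. A minimal sketch of the invocation, mirroring the traced arguments (the libasan path is whatever ldd reported for the plugin on this builder):

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

Listing the ASan runtime ahead of the plugin in LD_PRELOAD keeps sanitizer interposition intact for the uninstrumented fio binary, which is why the wrapper greps the plugin's ldd output for libasan before composing the variable.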
00:23:24.750 00:23:24.750 real 0m12.743s 00:23:24.750 user 0m38.900s 00:23:24.750 sys 0m16.155s 00:23:24.750 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.750 15:35:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:24.750 ************************************ 00:23:24.750 END TEST bdev_fio 00:23:24.750 ************************************ 00:23:24.750 15:35:10 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:24.750 15:35:10 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:24.750 15:35:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:24.750 15:35:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.750 15:35:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:24.750 ************************************ 00:23:24.750 START TEST bdev_verify 00:23:24.750 ************************************ 00:23:24.750 15:35:10 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:24.750 [2024-11-20 15:35:10.603364] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:23:24.750 [2024-11-20 15:35:10.603502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74837 ] 00:23:25.009 [2024-11-20 15:35:10.773351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:25.009 [2024-11-20 15:35:10.890866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.009 [2024-11-20 15:35:10.890865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.577 Running I/O for 5 seconds... 
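The verify pass now starting is plain bdevperf driven by the same JSON config; a sketch of the equivalent direct invocation, per the traced parameters (queue depth 128, 4 KiB I/Os, 5-second verify workload, core mask 0x3; -C lets every core in the mask submit to each bdev, which is why each nvme*n1 job is reported twice below, once per core):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3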
00:23:27.892 24640.00 IOPS, 96.25 MiB/s [2024-11-20T15:35:14.786Z] 24368.00 IOPS, 95.19 MiB/s [2024-11-20T15:35:15.722Z] 24608.00 IOPS, 96.12 MiB/s [2024-11-20T15:35:16.661Z] 23720.00 IOPS, 92.66 MiB/s [2024-11-20T15:35:16.661Z] 24121.60 IOPS, 94.22 MiB/s 00:23:30.703 Latency(us) 00:23:30.703 [2024-11-20T15:35:16.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.703 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0x0 length 0x80000 00:23:30.703 nvme0n1 : 5.06 1847.08 7.22 0.00 0.00 69174.19 5773.41 77894.22 00:23:30.703 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0x80000 length 0x80000 00:23:30.703 nvme0n1 : 5.07 1844.49 7.21 0.00 0.00 69274.58 5742.20 78393.54 00:23:30.703 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0x0 length 0x80000 00:23:30.703 nvme0n2 : 5.09 1837.25 7.18 0.00 0.00 69433.30 9112.62 78892.86 00:23:30.703 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0x80000 length 0x80000 00:23:30.703 nvme0n2 : 5.08 1840.71 7.19 0.00 0.00 69298.56 7552.24 85883.37 00:23:30.703 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0x0 length 0x80000 00:23:30.703 nvme0n3 : 5.03 1832.52 7.16 0.00 0.00 69496.23 11297.16 73400.32 00:23:30.703 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0x80000 length 0x80000 00:23:30.703 nvme0n3 : 5.04 1827.26 7.14 0.00 0.00 69701.12 8301.23 87381.33 00:23:30.703 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0x0 length 0x20000 00:23:30.703 nvme1n1 : 5.09 1834.29 7.17 0.00 0.00 69328.58 15791.06 74398.96 00:23:30.703 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0x20000 length 0x20000 00:23:30.703 nvme1n1 : 5.05 1826.44 7.13 0.00 0.00 69626.58 13294.45 80390.83 00:23:30.703 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0x0 length 0xa0000 00:23:30.703 nvme2n1 : 5.10 1832.99 7.16 0.00 0.00 69276.66 8925.38 86882.01 00:23:30.703 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0xa0000 length 0xa0000 00:23:30.703 nvme2n1 : 5.09 1835.77 7.17 0.00 0.00 69163.15 6397.56 85384.05 00:23:30.703 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0x0 length 0xbd0bd 00:23:30.703 nvme3n1 : 5.08 2833.98 11.07 0.00 0.00 44712.22 2886.70 79891.50 00:23:30.703 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:30.703 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:23:30.703 nvme3n1 : 5.09 2683.25 10.48 0.00 0.00 47147.48 3916.56 55424.73 00:23:30.703 [2024-11-20T15:35:16.661Z] =================================================================================================================== 00:23:30.704 [2024-11-20T15:35:16.662Z] Total : 23876.02 93.27 0.00 0.00 63936.29 2886.70 87381.33 00:23:32.094 00:23:32.094 real 0m7.280s 00:23:32.094 user 0m11.561s 00:23:32.094 sys 0m1.880s 00:23:32.094 15:35:17 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.094 15:35:17 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:32.094 ************************************ 00:23:32.094 END TEST bdev_verify 00:23:32.094 ************************************ 00:23:32.094 15:35:17 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:32.094 15:35:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:32.094 15:35:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.094 15:35:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:32.094 ************************************ 00:23:32.094 START TEST bdev_verify_big_io 00:23:32.094 ************************************ 00:23:32.094 15:35:17 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:32.094 [2024-11-20 15:35:17.979914] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:23:32.094 [2024-11-20 15:35:17.980145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74937 ] 00:23:32.351 [2024-11-20 15:35:18.172212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:32.351 [2024-11-20 15:35:18.289899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.351 [2024-11-20 15:35:18.289918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.284 Running I/O for 5 seconds... 
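The big-I/O variant that follows differs from the previous run only in I/O size: 65536-byte requests instead of 4096, exercising the bdev layer's large-buffer and split paths. Sketch of the traced invocation:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3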
00:23:39.124 1184.00 IOPS, 74.00 MiB/s [2024-11-20T15:35:25.082Z] 2744.00 IOPS, 171.50 MiB/s 00:23:39.124 Latency(us) 00:23:39.124 [2024-11-20T15:35:25.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.124 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:39.124 Verification LBA range: start 0x0 length 0x8000 00:23:39.124 nvme0n1 : 5.79 107.73 6.73 0.00 0.00 1146586.33 34453.21 1158426.82 00:23:39.124 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:39.124 Verification LBA range: start 0x8000 length 0x8000 00:23:39.124 nvme0n1 : 5.78 99.67 6.23 0.00 0.00 1262894.97 25964.74 2444680.05 00:23:39.124 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:39.124 Verification LBA range: start 0x0 length 0x8000 00:23:39.124 nvme0n2 : 5.84 106.93 6.68 0.00 0.00 1102435.75 40944.40 1493971.14 00:23:39.124 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:39.124 Verification LBA range: start 0x8000 length 0x8000 00:23:39.124 nvme0n2 : 5.78 85.80 5.36 0.00 0.00 1425821.25 25465.42 3467291.31 00:23:39.124 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:39.125 Verification LBA range: start 0x0 length 0x8000 00:23:39.125 nvme0n3 : 5.85 128.57 8.04 0.00 0.00 913338.07 11234.74 1230329.17 00:23:39.125 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:39.125 Verification LBA range: start 0x8000 length 0x8000 00:23:39.125 nvme0n3 : 5.78 130.04 8.13 0.00 0.00 913689.03 12545.46 1198372.57 00:23:39.125 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:39.125 Verification LBA range: start 0x0 length 0x2000 00:23:39.125 nvme1n1 : 5.84 128.70 8.04 0.00 0.00 887716.99 40694.74 1206361.72 00:23:39.125 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:39.125 Verification LBA range: start 0x2000 length 0x2000 00:23:39.125 nvme1n1 : 5.79 132.74 8.30 0.00 0.00 872508.55 11546.82 1198372.57 00:23:39.125 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:39.125 Verification LBA range: start 0x0 length 0xa000 00:23:39.125 nvme2n1 : 5.85 128.51 8.03 0.00 0.00 868267.92 4712.35 1350166.43 00:23:39.125 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:39.125 Verification LBA range: start 0xa000 length 0xa000 00:23:39.125 nvme2n1 : 5.79 118.76 7.42 0.00 0.00 946160.12 13356.86 1190383.42 00:23:39.125 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:39.125 Verification LBA range: start 0x0 length 0xbd0b 00:23:39.125 nvme3n1 : 5.85 139.39 8.71 0.00 0.00 777512.85 6085.49 1126470.22 00:23:39.125 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:39.125 Verification LBA range: start 0xbd0b length 0xbd0b 00:23:39.125 nvme3n1 : 5.80 162.87 10.18 0.00 0.00 669413.63 6584.81 1002638.38 00:23:39.125 [2024-11-20T15:35:25.083Z] =================================================================================================================== 00:23:39.125 [2024-11-20T15:35:25.083Z] Total : 1469.71 91.86 0.00 0.00 950359.47 4712.35 3467291.31 00:23:40.500 00:23:40.500 real 0m8.349s 00:23:40.500 user 0m15.206s 00:23:40.500 sys 0m0.534s 00:23:40.500 15:35:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:40.500 15:35:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set 
+x 00:23:40.500 ************************************ 00:23:40.500 END TEST bdev_verify_big_io 00:23:40.500 ************************************ 00:23:40.500 15:35:26 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:40.500 15:35:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:40.500 15:35:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:40.500 15:35:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:40.500 ************************************ 00:23:40.500 START TEST bdev_write_zeroes 00:23:40.500 ************************************ 00:23:40.500 15:35:26 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:40.500 [2024-11-20 15:35:26.356976] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:23:40.500 [2024-11-20 15:35:26.357105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75057 ] 00:23:40.758 [2024-11-20 15:35:26.528156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.758 [2024-11-20 15:35:26.639691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.324 Running I/O for 1 seconds... 00:23:42.258 80256.00 IOPS, 313.50 MiB/s 00:23:42.258 Latency(us) 00:23:42.258 [2024-11-20T15:35:28.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.258 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.258 nvme0n1 : 1.02 12682.64 49.54 0.00 0.00 10082.32 5929.45 22968.81 00:23:42.258 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.258 nvme0n2 : 1.02 12662.26 49.46 0.00 0.00 10092.16 6210.32 23468.13 00:23:42.258 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.258 nvme0n3 : 1.02 12643.74 49.39 0.00 0.00 10100.08 6303.94 23842.62 00:23:42.258 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.258 nvme1n1 : 1.02 12625.54 49.32 0.00 0.00 10108.32 6210.32 24341.94 00:23:42.258 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.258 nvme2n1 : 1.03 12607.05 49.25 0.00 0.00 10116.78 6147.90 24716.43 00:23:42.258 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.258 nvme3n1 : 1.03 15957.78 62.34 0.00 0.00 7971.76 2168.93 24217.11 00:23:42.258 [2024-11-20T15:35:28.216Z] =================================================================================================================== 00:23:42.258 [2024-11-20T15:35:28.216Z] Total : 79179.01 309.29 0.00 0.00 9668.25 2168.93 24716.43 00:23:43.636 00:23:43.636 real 0m3.055s 00:23:43.636 user 0m2.244s 00:23:43.636 sys 0m0.643s 00:23:43.636 15:35:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.636 15:35:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:23:43.636 ************************************ 00:23:43.636 END TEST 
bdev_write_zeroes 00:23:43.636 ************************************ 00:23:43.636 15:35:29 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:43.636 15:35:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:43.636 15:35:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.636 15:35:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:43.636 ************************************ 00:23:43.636 START TEST bdev_json_nonenclosed 00:23:43.636 ************************************ 00:23:43.636 15:35:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:43.636 [2024-11-20 15:35:29.471697] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:23:43.636 [2024-11-20 15:35:29.471829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75114 ] 00:23:43.894 [2024-11-20 15:35:29.639092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.894 [2024-11-20 15:35:29.744244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.894 [2024-11-20 15:35:29.744344] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:43.894 [2024-11-20 15:35:29.744366] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:43.894 [2024-11-20 15:35:29.744378] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:44.152 00:23:44.152 real 0m0.608s 00:23:44.152 user 0m0.382s 00:23:44.152 sys 0m0.122s 00:23:44.152 15:35:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.152 15:35:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:44.152 ************************************ 00:23:44.152 END TEST bdev_json_nonenclosed 00:23:44.152 ************************************ 00:23:44.152 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:44.152 15:35:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:44.152 15:35:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.152 15:35:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:44.152 ************************************ 00:23:44.152 START TEST bdev_json_nonarray 00:23:44.152 ************************************ 00:23:44.152 15:35:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:44.410 [2024-11-20 15:35:30.138855] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:23:44.410 [2024-11-20 15:35:30.138989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75139 ] 00:23:44.410 [2024-11-20 15:35:30.305546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.668 [2024-11-20 15:35:30.415106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.668 [2024-11-20 15:35:30.415221] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:23:44.668 [2024-11-20 15:35:30.415244] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:44.668 [2024-11-20 15:35:30.415256] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:44.927 00:23:44.927 real 0m0.607s 00:23:44.927 user 0m0.377s 00:23:44.927 sys 0m0.126s 00:23:44.927 15:35:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.927 15:35:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:44.927 ************************************ 00:23:44.927 END TEST bdev_json_nonarray 00:23:44.927 ************************************ 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:23:44.927 15:35:30 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:45.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:49.715 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:49.715 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:49.715 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:49.715 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:49.715 00:23:49.715 real 0m58.910s 00:23:49.715 user 1m38.662s 00:23:49.715 sys 0m35.718s 00:23:49.715 15:35:35 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:49.715 ************************************ 00:23:49.715 15:35:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:49.715 END TEST blockdev_xnvme 00:23:49.715 ************************************ 00:23:49.715 15:35:35 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:49.715 15:35:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:49.715 15:35:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.715 15:35:35 -- 
common/autotest_common.sh@10 -- # set +x 00:23:49.715 ************************************ 00:23:49.715 START TEST ublk 00:23:49.715 ************************************ 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:49.715 * Looking for test storage... 00:23:49.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:49.715 15:35:35 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.715 15:35:35 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.715 15:35:35 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.715 15:35:35 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.715 15:35:35 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.715 15:35:35 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.715 15:35:35 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.715 15:35:35 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.715 15:35:35 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.715 15:35:35 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.715 15:35:35 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.715 15:35:35 ublk -- scripts/common.sh@344 -- # case "$op" in 00:23:49.715 15:35:35 ublk -- scripts/common.sh@345 -- # : 1 00:23:49.715 15:35:35 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.715 15:35:35 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.715 15:35:35 ublk -- scripts/common.sh@365 -- # decimal 1 00:23:49.715 15:35:35 ublk -- scripts/common.sh@353 -- # local d=1 00:23:49.715 15:35:35 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.715 15:35:35 ublk -- scripts/common.sh@355 -- # echo 1 00:23:49.715 15:35:35 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.715 15:35:35 ublk -- scripts/common.sh@366 -- # decimal 2 00:23:49.715 15:35:35 ublk -- scripts/common.sh@353 -- # local d=2 00:23:49.715 15:35:35 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.715 15:35:35 ublk -- scripts/common.sh@355 -- # echo 2 00:23:49.715 15:35:35 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.715 15:35:35 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.715 15:35:35 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.715 15:35:35 ublk -- scripts/common.sh@368 -- # return 0 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:49.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.715 --rc genhtml_branch_coverage=1 00:23:49.715 --rc genhtml_function_coverage=1 00:23:49.715 --rc genhtml_legend=1 00:23:49.715 --rc geninfo_all_blocks=1 00:23:49.715 --rc geninfo_unexecuted_blocks=1 00:23:49.715 00:23:49.715 ' 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:49.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.715 --rc genhtml_branch_coverage=1 00:23:49.715 --rc genhtml_function_coverage=1 00:23:49.715 --rc genhtml_legend=1 00:23:49.715 --rc geninfo_all_blocks=1 00:23:49.715 --rc geninfo_unexecuted_blocks=1 00:23:49.715 00:23:49.715 ' 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:49.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.715 --rc genhtml_branch_coverage=1 00:23:49.715 --rc genhtml_function_coverage=1 00:23:49.715 --rc genhtml_legend=1 00:23:49.715 --rc geninfo_all_blocks=1 00:23:49.715 --rc geninfo_unexecuted_blocks=1 00:23:49.715 00:23:49.715 ' 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:49.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.715 --rc genhtml_branch_coverage=1 00:23:49.715 --rc genhtml_function_coverage=1 00:23:49.715 --rc genhtml_legend=1 00:23:49.715 --rc geninfo_all_blocks=1 00:23:49.715 --rc geninfo_unexecuted_blocks=1 00:23:49.715 00:23:49.715 ' 00:23:49.715 15:35:35 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:23:49.715 15:35:35 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:23:49.715 15:35:35 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:23:49.715 15:35:35 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:23:49.715 15:35:35 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:23:49.715 15:35:35 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:23:49.715 15:35:35 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:23:49.715 15:35:35 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:23:49.715 15:35:35 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:23:49.715 15:35:35 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:23:49.715 15:35:35 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:23:49.715 15:35:35 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:23:49.715 15:35:35 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:23:49.715 15:35:35 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:23:49.715 15:35:35 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:23:49.715 15:35:35 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:23:49.715 15:35:35 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:23:49.715 15:35:35 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:23:49.715 15:35:35 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:23:49.715 15:35:35 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.715 15:35:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:49.715 ************************************ 00:23:49.715 START TEST test_save_ublk_config 00:23:49.715 ************************************ 00:23:49.715 15:35:35 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:23:49.715 15:35:35 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:23:49.715 15:35:35 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75441 00:23:49.715 15:35:35 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:23:49.716 15:35:35 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:23:49.716 15:35:35 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75441 00:23:49.716 15:35:35 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75441 ']' 00:23:49.716 15:35:35 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.716 15:35:35 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.716 15:35:35 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.716 15:35:35 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.716 15:35:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:49.716 [2024-11-20 15:35:35.498759] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
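Once this spdk_tgt is listening, the test builds its device stack entirely over JSON-RPC before dumping the configuration, as the xtrace below shows: create the ublk kernel target, back it with a malloc bdev, expose it as /dev/ublkb0, then save_config. A condensed sketch of the same sequence via scripts/rpc.py; parameter values are read off the saved JSON below, and the -b name flag for bdev_malloc_create is an assumption:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py  # default socket /var/tmp/spdk.sock
    "$RPC" ublk_create_target                        # saved config records "cpumask": "1"
    "$RPC" bdev_malloc_create -b malloc0 32 4096     # 8192 blocks x 4096 B = 32 MiB
    "$RPC" ublk_start_disk malloc0 0 -q 1 -d 128     # ublk id 0 -> /dev/ublkb0
    "$RPC" save_config > /tmp/ublk_config.json       # the dump replayed later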
00:23:49.716 [2024-11-20 15:35:35.498914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75441 ] 00:23:49.716 [2024-11-20 15:35:35.669901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.974 [2024-11-20 15:35:35.778174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.907 15:35:36 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.907 15:35:36 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:23:50.907 15:35:36 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:23:50.907 15:35:36 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:23:50.907 15:35:36 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.907 15:35:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:50.907 [2024-11-20 15:35:36.645615] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:50.907 [2024-11-20 15:35:36.646773] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:50.907 malloc0 00:23:50.907 [2024-11-20 15:35:36.733725] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:23:50.907 [2024-11-20 15:35:36.733846] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:23:50.907 [2024-11-20 15:35:36.733861] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:50.907 [2024-11-20 15:35:36.733871] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:50.907 [2024-11-20 15:35:36.742743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:50.907 [2024-11-20 15:35:36.742774] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:50.907 [2024-11-20 15:35:36.749608] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:50.907 [2024-11-20 15:35:36.749725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:50.907 [2024-11-20 15:35:36.766602] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:50.907 0 00:23:50.907 15:35:36 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.907 15:35:36 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:23:50.907 15:35:36 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.907 15:35:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:51.165 15:35:37 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.165 15:35:37 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:23:51.165 "subsystems": [ 00:23:51.165 { 00:23:51.165 "subsystem": "fsdev", 00:23:51.165 "config": [ 00:23:51.165 { 00:23:51.165 "method": "fsdev_set_opts", 00:23:51.165 "params": { 00:23:51.165 "fsdev_io_pool_size": 65535, 00:23:51.165 "fsdev_io_cache_size": 256 00:23:51.165 } 00:23:51.165 } 00:23:51.165 ] 00:23:51.165 }, 00:23:51.165 { 00:23:51.165 "subsystem": "keyring", 00:23:51.165 "config": [] 00:23:51.165 }, 00:23:51.165 { 00:23:51.165 "subsystem": "iobuf", 00:23:51.165 "config": [ 00:23:51.165 { 
00:23:51.165 "method": "iobuf_set_options", 00:23:51.165 "params": { 00:23:51.165 "small_pool_count": 8192, 00:23:51.165 "large_pool_count": 1024, 00:23:51.165 "small_bufsize": 8192, 00:23:51.165 "large_bufsize": 135168, 00:23:51.165 "enable_numa": false 00:23:51.165 } 00:23:51.165 } 00:23:51.165 ] 00:23:51.165 }, 00:23:51.165 { 00:23:51.165 "subsystem": "sock", 00:23:51.165 "config": [ 00:23:51.165 { 00:23:51.165 "method": "sock_set_default_impl", 00:23:51.165 "params": { 00:23:51.165 "impl_name": "posix" 00:23:51.165 } 00:23:51.165 }, 00:23:51.165 { 00:23:51.165 "method": "sock_impl_set_options", 00:23:51.165 "params": { 00:23:51.165 "impl_name": "ssl", 00:23:51.165 "recv_buf_size": 4096, 00:23:51.165 "send_buf_size": 4096, 00:23:51.165 "enable_recv_pipe": true, 00:23:51.165 "enable_quickack": false, 00:23:51.165 "enable_placement_id": 0, 00:23:51.165 "enable_zerocopy_send_server": true, 00:23:51.165 "enable_zerocopy_send_client": false, 00:23:51.165 "zerocopy_threshold": 0, 00:23:51.165 "tls_version": 0, 00:23:51.165 "enable_ktls": false 00:23:51.165 } 00:23:51.165 }, 00:23:51.165 { 00:23:51.165 "method": "sock_impl_set_options", 00:23:51.165 "params": { 00:23:51.165 "impl_name": "posix", 00:23:51.165 "recv_buf_size": 2097152, 00:23:51.165 "send_buf_size": 2097152, 00:23:51.165 "enable_recv_pipe": true, 00:23:51.165 "enable_quickack": false, 00:23:51.165 "enable_placement_id": 0, 00:23:51.165 "enable_zerocopy_send_server": true, 00:23:51.165 "enable_zerocopy_send_client": false, 00:23:51.165 "zerocopy_threshold": 0, 00:23:51.165 "tls_version": 0, 00:23:51.165 "enable_ktls": false 00:23:51.166 } 00:23:51.166 } 00:23:51.166 ] 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "subsystem": "vmd", 00:23:51.166 "config": [] 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "subsystem": "accel", 00:23:51.166 "config": [ 00:23:51.166 { 00:23:51.166 "method": "accel_set_options", 00:23:51.166 "params": { 00:23:51.166 "small_cache_size": 128, 00:23:51.166 "large_cache_size": 16, 00:23:51.166 "task_count": 2048, 00:23:51.166 "sequence_count": 2048, 00:23:51.166 "buf_count": 2048 00:23:51.166 } 00:23:51.166 } 00:23:51.166 ] 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "subsystem": "bdev", 00:23:51.166 "config": [ 00:23:51.166 { 00:23:51.166 "method": "bdev_set_options", 00:23:51.166 "params": { 00:23:51.166 "bdev_io_pool_size": 65535, 00:23:51.166 "bdev_io_cache_size": 256, 00:23:51.166 "bdev_auto_examine": true, 00:23:51.166 "iobuf_small_cache_size": 128, 00:23:51.166 "iobuf_large_cache_size": 16 00:23:51.166 } 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "method": "bdev_raid_set_options", 00:23:51.166 "params": { 00:23:51.166 "process_window_size_kb": 1024, 00:23:51.166 "process_max_bandwidth_mb_sec": 0 00:23:51.166 } 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "method": "bdev_iscsi_set_options", 00:23:51.166 "params": { 00:23:51.166 "timeout_sec": 30 00:23:51.166 } 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "method": "bdev_nvme_set_options", 00:23:51.166 "params": { 00:23:51.166 "action_on_timeout": "none", 00:23:51.166 "timeout_us": 0, 00:23:51.166 "timeout_admin_us": 0, 00:23:51.166 "keep_alive_timeout_ms": 10000, 00:23:51.166 "arbitration_burst": 0, 00:23:51.166 "low_priority_weight": 0, 00:23:51.166 "medium_priority_weight": 0, 00:23:51.166 "high_priority_weight": 0, 00:23:51.166 "nvme_adminq_poll_period_us": 10000, 00:23:51.166 "nvme_ioq_poll_period_us": 0, 00:23:51.166 "io_queue_requests": 0, 00:23:51.166 "delay_cmd_submit": true, 00:23:51.166 "transport_retry_count": 4, 00:23:51.166 
"bdev_retry_count": 3, 00:23:51.166 "transport_ack_timeout": 0, 00:23:51.166 "ctrlr_loss_timeout_sec": 0, 00:23:51.166 "reconnect_delay_sec": 0, 00:23:51.166 "fast_io_fail_timeout_sec": 0, 00:23:51.166 "disable_auto_failback": false, 00:23:51.166 "generate_uuids": false, 00:23:51.166 "transport_tos": 0, 00:23:51.166 "nvme_error_stat": false, 00:23:51.166 "rdma_srq_size": 0, 00:23:51.166 "io_path_stat": false, 00:23:51.166 "allow_accel_sequence": false, 00:23:51.166 "rdma_max_cq_size": 0, 00:23:51.166 "rdma_cm_event_timeout_ms": 0, 00:23:51.166 "dhchap_digests": [ 00:23:51.166 "sha256", 00:23:51.166 "sha384", 00:23:51.166 "sha512" 00:23:51.166 ], 00:23:51.166 "dhchap_dhgroups": [ 00:23:51.166 "null", 00:23:51.166 "ffdhe2048", 00:23:51.166 "ffdhe3072", 00:23:51.166 "ffdhe4096", 00:23:51.166 "ffdhe6144", 00:23:51.166 "ffdhe8192" 00:23:51.166 ] 00:23:51.166 } 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "method": "bdev_nvme_set_hotplug", 00:23:51.166 "params": { 00:23:51.166 "period_us": 100000, 00:23:51.166 "enable": false 00:23:51.166 } 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "method": "bdev_malloc_create", 00:23:51.166 "params": { 00:23:51.166 "name": "malloc0", 00:23:51.166 "num_blocks": 8192, 00:23:51.166 "block_size": 4096, 00:23:51.166 "physical_block_size": 4096, 00:23:51.166 "uuid": "6c2016f6-a84c-47ab-88f2-97db64d71c5f", 00:23:51.166 "optimal_io_boundary": 0, 00:23:51.166 "md_size": 0, 00:23:51.166 "dif_type": 0, 00:23:51.166 "dif_is_head_of_md": false, 00:23:51.166 "dif_pi_format": 0 00:23:51.166 } 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "method": "bdev_wait_for_examine" 00:23:51.166 } 00:23:51.166 ] 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "subsystem": "scsi", 00:23:51.166 "config": null 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "subsystem": "scheduler", 00:23:51.166 "config": [ 00:23:51.166 { 00:23:51.166 "method": "framework_set_scheduler", 00:23:51.166 "params": { 00:23:51.166 "name": "static" 00:23:51.166 } 00:23:51.166 } 00:23:51.166 ] 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "subsystem": "vhost_scsi", 00:23:51.166 "config": [] 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "subsystem": "vhost_blk", 00:23:51.166 "config": [] 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "subsystem": "ublk", 00:23:51.166 "config": [ 00:23:51.166 { 00:23:51.166 "method": "ublk_create_target", 00:23:51.166 "params": { 00:23:51.166 "cpumask": "1" 00:23:51.166 } 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "method": "ublk_start_disk", 00:23:51.166 "params": { 00:23:51.166 "bdev_name": "malloc0", 00:23:51.166 "ublk_id": 0, 00:23:51.166 "num_queues": 1, 00:23:51.166 "queue_depth": 128 00:23:51.166 } 00:23:51.166 } 00:23:51.166 ] 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "subsystem": "nbd", 00:23:51.166 "config": [] 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "subsystem": "nvmf", 00:23:51.166 "config": [ 00:23:51.166 { 00:23:51.166 "method": "nvmf_set_config", 00:23:51.166 "params": { 00:23:51.166 "discovery_filter": "match_any", 00:23:51.166 "admin_cmd_passthru": { 00:23:51.166 "identify_ctrlr": false 00:23:51.166 }, 00:23:51.166 "dhchap_digests": [ 00:23:51.166 "sha256", 00:23:51.166 "sha384", 00:23:51.166 "sha512" 00:23:51.166 ], 00:23:51.166 "dhchap_dhgroups": [ 00:23:51.166 "null", 00:23:51.166 "ffdhe2048", 00:23:51.166 "ffdhe3072", 00:23:51.166 "ffdhe4096", 00:23:51.166 "ffdhe6144", 00:23:51.166 "ffdhe8192" 00:23:51.166 ] 00:23:51.166 } 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "method": "nvmf_set_max_subsystems", 00:23:51.166 "params": { 00:23:51.166 "max_subsystems": 1024 
00:23:51.166 } 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "method": "nvmf_set_crdt", 00:23:51.166 "params": { 00:23:51.166 "crdt1": 0, 00:23:51.166 "crdt2": 0, 00:23:51.166 "crdt3": 0 00:23:51.166 } 00:23:51.166 } 00:23:51.166 ] 00:23:51.166 }, 00:23:51.166 { 00:23:51.166 "subsystem": "iscsi", 00:23:51.166 "config": [ 00:23:51.166 { 00:23:51.166 "method": "iscsi_set_options", 00:23:51.166 "params": { 00:23:51.166 "node_base": "iqn.2016-06.io.spdk", 00:23:51.166 "max_sessions": 128, 00:23:51.166 "max_connections_per_session": 2, 00:23:51.166 "max_queue_depth": 64, 00:23:51.166 "default_time2wait": 2, 00:23:51.166 "default_time2retain": 20, 00:23:51.166 "first_burst_length": 8192, 00:23:51.166 "immediate_data": true, 00:23:51.166 "allow_duplicated_isid": false, 00:23:51.166 "error_recovery_level": 0, 00:23:51.166 "nop_timeout": 60, 00:23:51.166 "nop_in_interval": 30, 00:23:51.166 "disable_chap": false, 00:23:51.166 "require_chap": false, 00:23:51.166 "mutual_chap": false, 00:23:51.166 "chap_group": 0, 00:23:51.166 "max_large_datain_per_connection": 64, 00:23:51.166 "max_r2t_per_connection": 4, 00:23:51.166 "pdu_pool_size": 36864, 00:23:51.166 "immediate_data_pool_size": 16384, 00:23:51.166 "data_out_pool_size": 2048 00:23:51.166 } 00:23:51.166 } 00:23:51.166 ] 00:23:51.166 } 00:23:51.166 ] 00:23:51.166 }' 00:23:51.166 15:35:37 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75441 00:23:51.166 15:35:37 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75441 ']' 00:23:51.166 15:35:37 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75441 00:23:51.166 15:35:37 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:23:51.166 15:35:37 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.166 15:35:37 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75441 00:23:51.424 15:35:37 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.424 15:35:37 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.424 killing process with pid 75441 00:23:51.424 15:35:37 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75441' 00:23:51.424 15:35:37 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75441 00:23:51.424 15:35:37 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75441 00:23:52.807 [2024-11-20 15:35:38.533827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:52.807 [2024-11-20 15:35:38.565683] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:52.807 [2024-11-20 15:35:38.565842] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:52.807 [2024-11-20 15:35:38.571610] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:52.807 [2024-11-20 15:35:38.571665] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:52.807 [2024-11-20 15:35:38.571684] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:52.807 [2024-11-20 15:35:38.571715] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:52.808 [2024-11-20 15:35:38.571865] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:54.733 15:35:40 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75507 00:23:54.733 15:35:40 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75507 00:23:54.733 15:35:40 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75507 ']' 00:23:54.733 15:35:40 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.733 15:35:40 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.733 15:35:40 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:23:54.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.733 15:35:40 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.733 15:35:40 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.733 15:35:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:54.733 15:35:40 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:23:54.733 "subsystems": [ 00:23:54.733 { 00:23:54.733 "subsystem": "fsdev", 00:23:54.733 "config": [ 00:23:54.733 { 00:23:54.733 "method": "fsdev_set_opts", 00:23:54.733 "params": { 00:23:54.733 "fsdev_io_pool_size": 65535, 00:23:54.733 "fsdev_io_cache_size": 256 00:23:54.733 } 00:23:54.733 } 00:23:54.733 ] 00:23:54.733 }, 00:23:54.733 { 00:23:54.734 "subsystem": "keyring", 00:23:54.734 "config": [] 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "subsystem": "iobuf", 00:23:54.734 "config": [ 00:23:54.734 { 00:23:54.734 "method": "iobuf_set_options", 00:23:54.734 "params": { 00:23:54.734 "small_pool_count": 8192, 00:23:54.734 "large_pool_count": 1024, 00:23:54.734 "small_bufsize": 8192, 00:23:54.734 "large_bufsize": 135168, 00:23:54.734 "enable_numa": false 00:23:54.734 } 00:23:54.734 } 00:23:54.734 ] 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "subsystem": "sock", 00:23:54.734 "config": [ 00:23:54.734 { 00:23:54.734 "method": "sock_set_default_impl", 00:23:54.734 "params": { 00:23:54.734 "impl_name": "posix" 00:23:54.734 } 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "method": "sock_impl_set_options", 00:23:54.734 "params": { 00:23:54.734 "impl_name": "ssl", 00:23:54.734 "recv_buf_size": 4096, 00:23:54.734 "send_buf_size": 4096, 00:23:54.734 "enable_recv_pipe": true, 00:23:54.734 "enable_quickack": false, 00:23:54.734 "enable_placement_id": 0, 00:23:54.734 "enable_zerocopy_send_server": true, 00:23:54.734 "enable_zerocopy_send_client": false, 00:23:54.734 "zerocopy_threshold": 0, 00:23:54.734 "tls_version": 0, 00:23:54.734 "enable_ktls": false 00:23:54.734 } 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "method": "sock_impl_set_options", 00:23:54.734 "params": { 00:23:54.734 "impl_name": "posix", 00:23:54.734 "recv_buf_size": 2097152, 00:23:54.734 "send_buf_size": 2097152, 00:23:54.734 "enable_recv_pipe": true, 00:23:54.734 "enable_quickack": false, 00:23:54.734 "enable_placement_id": 0, 00:23:54.734 "enable_zerocopy_send_server": true, 00:23:54.734 "enable_zerocopy_send_client": false, 00:23:54.734 "zerocopy_threshold": 0, 00:23:54.734 "tls_version": 0, 00:23:54.734 "enable_ktls": false 00:23:54.734 } 00:23:54.734 } 00:23:54.734 ] 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "subsystem": "vmd", 00:23:54.734 "config": [] 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "subsystem": "accel", 00:23:54.734 "config": [ 00:23:54.734 { 00:23:54.734 "method": "accel_set_options", 00:23:54.734 "params": { 00:23:54.734 "small_cache_size": 128, 
00:23:54.734 "large_cache_size": 16, 00:23:54.734 "task_count": 2048, 00:23:54.734 "sequence_count": 2048, 00:23:54.734 "buf_count": 2048 00:23:54.734 } 00:23:54.734 } 00:23:54.734 ] 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "subsystem": "bdev", 00:23:54.734 "config": [ 00:23:54.734 { 00:23:54.734 "method": "bdev_set_options", 00:23:54.734 "params": { 00:23:54.734 "bdev_io_pool_size": 65535, 00:23:54.734 "bdev_io_cache_size": 256, 00:23:54.734 "bdev_auto_examine": true, 00:23:54.734 "iobuf_small_cache_size": 128, 00:23:54.734 "iobuf_large_cache_size": 16 00:23:54.734 } 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "method": "bdev_raid_set_options", 00:23:54.734 "params": { 00:23:54.734 "process_window_size_kb": 1024, 00:23:54.734 "process_max_bandwidth_mb_sec": 0 00:23:54.734 } 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "method": "bdev_iscsi_set_options", 00:23:54.734 "params": { 00:23:54.734 "timeout_sec": 30 00:23:54.734 } 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "method": "bdev_nvme_set_options", 00:23:54.734 "params": { 00:23:54.734 "action_on_timeout": "none", 00:23:54.734 "timeout_us": 0, 00:23:54.734 "timeout_admin_us": 0, 00:23:54.734 "keep_alive_timeout_ms": 10000, 00:23:54.734 "arbitration_burst": 0, 00:23:54.734 "low_priority_weight": 0, 00:23:54.734 "medium_priority_weight": 0, 00:23:54.734 "high_priority_weight": 0, 00:23:54.734 "nvme_adminq_poll_period_us": 10000, 00:23:54.734 "nvme_ioq_poll_period_us": 0, 00:23:54.734 "io_queue_requests": 0, 00:23:54.734 "delay_cmd_submit": true, 00:23:54.734 "transport_retry_count": 4, 00:23:54.734 "bdev_retry_count": 3, 00:23:54.734 "transport_ack_timeout": 0, 00:23:54.734 "ctrlr_loss_timeout_sec": 0, 00:23:54.734 "reconnect_delay_sec": 0, 00:23:54.734 "fast_io_fail_timeout_sec": 0, 00:23:54.734 "disable_auto_failback": false, 00:23:54.734 "generate_uuids": false, 00:23:54.734 "transport_tos": 0, 00:23:54.734 "nvme_error_stat": false, 00:23:54.734 "rdma_srq_size": 0, 00:23:54.734 "io_path_stat": false, 00:23:54.734 "allow_accel_sequence": false, 00:23:54.734 "rdma_max_cq_size": 0, 00:23:54.734 "rdma_cm_event_timeout_ms": 0, 00:23:54.734 "dhchap_digests": [ 00:23:54.734 "sha256", 00:23:54.734 "sha384", 00:23:54.734 "sha512" 00:23:54.734 ], 00:23:54.734 "dhchap_dhgroups": [ 00:23:54.734 "null", 00:23:54.734 "ffdhe2048", 00:23:54.734 "ffdhe3072", 00:23:54.734 "ffdhe4096", 00:23:54.734 "ffdhe6144", 00:23:54.734 "ffdhe8192" 00:23:54.734 ] 00:23:54.734 } 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "method": "bdev_nvme_set_hotplug", 00:23:54.734 "params": { 00:23:54.734 "period_us": 100000, 00:23:54.734 "enable": false 00:23:54.734 } 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "method": "bdev_malloc_create", 00:23:54.734 "params": { 00:23:54.734 "name": "malloc0", 00:23:54.734 "num_blocks": 8192, 00:23:54.734 "block_size": 4096, 00:23:54.734 "physical_block_size": 4096, 00:23:54.734 "uuid": "6c2016f6-a84c-47ab-88f2-97db64d71c5f", 00:23:54.734 "optimal_io_boundary": 0, 00:23:54.734 "md_size": 0, 00:23:54.734 "dif_type": 0, 00:23:54.734 "dif_is_head_of_md": false, 00:23:54.734 "dif_pi_format": 0 00:23:54.734 } 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "method": "bdev_wait_for_examine" 00:23:54.734 } 00:23:54.734 ] 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "subsystem": "scsi", 00:23:54.734 "config": null 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "subsystem": "scheduler", 00:23:54.734 "config": [ 00:23:54.734 { 00:23:54.734 "method": "framework_set_scheduler", 00:23:54.734 "params": { 00:23:54.734 "name": "static" 00:23:54.734 } 
00:23:54.734 } 00:23:54.734 ] 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "subsystem": "vhost_scsi", 00:23:54.734 "config": [] 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "subsystem": "vhost_blk", 00:23:54.734 "config": [] 00:23:54.734 }, 00:23:54.734 { 00:23:54.734 "subsystem": "ublk", 00:23:54.734 "config": [ 00:23:54.734 { 00:23:54.734 "method": "ublk_create_target", 00:23:54.734 "params": { 00:23:54.734 "cpumask": "1" 00:23:54.734 } 00:23:54.734 }, 00:23:54.734 { 00:23:54.735 "method": "ublk_start_disk", 00:23:54.735 "params": { 00:23:54.735 "bdev_name": "malloc0", 00:23:54.735 "ublk_id": 0, 00:23:54.735 "num_queues": 1, 00:23:54.735 "queue_depth": 128 00:23:54.735 } 00:23:54.735 } 00:23:54.735 ] 00:23:54.735 }, 00:23:54.735 { 00:23:54.735 "subsystem": "nbd", 00:23:54.735 "config": [] 00:23:54.735 }, 00:23:54.735 { 00:23:54.735 "subsystem": "nvmf", 00:23:54.735 "config": [ 00:23:54.735 { 00:23:54.735 "method": "nvmf_set_config", 00:23:54.735 "params": { 00:23:54.735 "discovery_filter": "match_any", 00:23:54.735 "admin_cmd_passthru": { 00:23:54.735 "identify_ctrlr": false 00:23:54.735 }, 00:23:54.735 "dhchap_digests": [ 00:23:54.735 "sha256", 00:23:54.735 "sha384", 00:23:54.735 "sha512" 00:23:54.735 ], 00:23:54.735 "dhchap_dhgroups": [ 00:23:54.735 "null", 00:23:54.735 "ffdhe2048", 00:23:54.735 "ffdhe3072", 00:23:54.735 "ffdhe4096", 00:23:54.735 "ffdhe6144", 00:23:54.735 "ffdhe8192" 00:23:54.735 ] 00:23:54.735 } 00:23:54.735 }, 00:23:54.735 { 00:23:54.735 "method": "nvmf_set_max_subsystems", 00:23:54.735 "params": { 00:23:54.735 "max_subsystems": 1024 00:23:54.735 } 00:23:54.735 }, 00:23:54.735 { 00:23:54.735 "method": "nvmf_set_crdt", 00:23:54.735 "params": { 00:23:54.735 "crdt1": 0, 00:23:54.735 "crdt2": 0, 00:23:54.735 "crdt3": 0 00:23:54.735 } 00:23:54.735 } 00:23:54.735 ] 00:23:54.735 }, 00:23:54.735 { 00:23:54.735 "subsystem": "iscsi", 00:23:54.735 "config": [ 00:23:54.735 { 00:23:54.735 "method": "iscsi_set_options", 00:23:54.735 "params": { 00:23:54.735 "node_base": "iqn.2016-06.io.spdk", 00:23:54.735 "max_sessions": 128, 00:23:54.735 "max_connections_per_session": 2, 00:23:54.735 "max_queue_depth": 64, 00:23:54.735 "default_time2wait": 2, 00:23:54.735 "default_time2retain": 20, 00:23:54.735 "first_burst_length": 8192, 00:23:54.735 "immediate_data": true, 00:23:54.735 "allow_duplicated_isid": false, 00:23:54.735 "error_recovery_level": 0, 00:23:54.735 "nop_timeout": 60, 00:23:54.735 "nop_in_interval": 30, 00:23:54.735 "disable_chap": false, 00:23:54.735 "require_chap": false, 00:23:54.735 "mutual_chap": false, 00:23:54.735 "chap_group": 0, 00:23:54.735 "max_large_datain_per_connection": 64, 00:23:54.735 "max_r2t_per_connection": 4, 00:23:54.735 "pdu_pool_size": 36864, 00:23:54.735 "immediate_data_pool_size": 16384, 00:23:54.735 "data_out_pool_size": 2048 00:23:54.735 } 00:23:54.735 } 00:23:54.735 ] 00:23:54.735 } 00:23:54.735 ] 00:23:54.735 }' 00:23:54.735 [2024-11-20 15:35:40.582716] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
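That second config dump is not output from the target but input to it: the harness echoes the JSON it saved in the first phase into a fresh spdk_tgt through bash process substitution, which is why the logged command line reads -c /dev/fd/63. The new target therefore boots with malloc0 and ublkb0 already defined, and nothing has to be re-created by hand. A sketch of the round trip, assuming the dump was written to a file:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    # Boot a new target directly from the saved configuration; the process
    # substitution shows up in the child's argv as /dev/fd/63.
    "$SPDK_REPO/build/bin/spdk_tgt" -L ublk -c <(cat /tmp/ublk_config.json)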
00:23:54.735 [2024-11-20 15:35:40.583444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75507 ] 00:23:54.994 [2024-11-20 15:35:40.783852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.252 [2024-11-20 15:35:40.960415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.189 [2024-11-20 15:35:42.145587] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:56.449 [2024-11-20 15:35:42.146772] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:56.449 [2024-11-20 15:35:42.153726] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:23:56.449 [2024-11-20 15:35:42.153808] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:23:56.449 [2024-11-20 15:35:42.153822] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:56.449 [2024-11-20 15:35:42.153830] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:56.449 [2024-11-20 15:35:42.162632] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:56.449 [2024-11-20 15:35:42.162657] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:56.449 [2024-11-20 15:35:42.170607] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:56.449 [2024-11-20 15:35:42.170705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:56.449 [2024-11-20 15:35:42.187589] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75507 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75507 ']' 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75507 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75507 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.449 killing process with pid 75507 00:23:56.449 
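The checks just logged are the point of the whole test: after replaying the config, ublk_get_disks must still report the device and the kernel node must exist. Condensed from the xtrace above, with names as in the log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    blkpath=$("$RPC" ublk_get_disks | jq -r '.[0].ublk_device')
    [[ $blkpath == /dev/ublkb0 ]]   # RPC view matches the expected path
    [[ -b $blkpath ]]               # and the block device node really exists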
15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75507' 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75507 00:23:56.449 15:35:42 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75507 00:23:58.354 [2024-11-20 15:35:43.832885] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:58.354 [2024-11-20 15:35:43.868707] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:58.354 [2024-11-20 15:35:43.868855] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:58.354 [2024-11-20 15:35:43.877612] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:58.354 [2024-11-20 15:35:43.877662] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:58.354 [2024-11-20 15:35:43.877671] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:58.354 [2024-11-20 15:35:43.877698] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:58.354 [2024-11-20 15:35:43.877837] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:00.260 15:35:45 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:24:00.260 00:24:00.260 real 0m10.317s 00:24:00.260 user 0m8.056s 00:24:00.260 sys 0m3.172s 00:24:00.260 15:35:45 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.260 ************************************ 00:24:00.260 15:35:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:00.260 END TEST test_save_ublk_config 00:24:00.260 ************************************ 00:24:00.260 15:35:45 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75599 00:24:00.260 15:35:45 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:00.260 15:35:45 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:00.260 15:35:45 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75599 00:24:00.260 15:35:45 ublk -- common/autotest_common.sh@835 -- # '[' -z 75599 ']' 00:24:00.260 15:35:45 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.260 15:35:45 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.260 15:35:45 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.260 15:35:45 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.260 15:35:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:00.260 [2024-11-20 15:35:45.860856] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
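From here on a single long-lived target serves the rest of the ublk suite: two reactors (-m 0x3) and ublk debug logging (-L ublk), with each test issuing RPCs against it. The harness's waitforlisten blocks until the RPC socket answers; a plain polling loop approximates it (spdk_get_version is used here only as an innocuous probe):

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/build/bin/spdk_tgt" -m 0x3 -L ublk &
    tgt_pid=$!
    # Rough stand-in for waitforlisten: poll the default socket until an
    # RPC succeeds (the real harness adds retries and a timeout).
    until "$SPDK_REPO/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done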
00:24:00.260 [2024-11-20 15:35:45.860998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75599 ] 00:24:00.260 [2024-11-20 15:35:46.029541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:00.260 [2024-11-20 15:35:46.145522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.260 [2024-11-20 15:35:46.145555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.196 15:35:47 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.196 15:35:47 ublk -- common/autotest_common.sh@868 -- # return 0 00:24:01.196 15:35:47 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:24:01.196 15:35:47 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:01.196 15:35:47 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.196 15:35:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:01.196 ************************************ 00:24:01.196 START TEST test_create_ublk 00:24:01.196 ************************************ 00:24:01.196 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:24:01.196 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:24:01.196 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.196 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:01.196 [2024-11-20 15:35:47.035592] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:01.196 [2024-11-20 15:35:47.038296] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:01.196 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.196 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:24:01.196 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:24:01.196 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.196 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:01.456 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.456 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:24:01.456 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:01.456 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.456 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:01.456 [2024-11-20 15:35:47.340743] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:24:01.456 [2024-11-20 15:35:47.341190] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:01.456 [2024-11-20 15:35:47.341211] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:01.456 [2024-11-20 15:35:47.341219] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:01.456 [2024-11-20 15:35:47.349906] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:01.456 [2024-11-20 15:35:47.349927] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:01.456 
[2024-11-20 15:35:47.356606] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:01.456 [2024-11-20 15:35:47.357178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:01.456 [2024-11-20 15:35:47.379644] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:01.456 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.456 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:24:01.456 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:24:01.456 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:24:01.456 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.456 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:01.456 15:35:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.456 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:24:01.456 { 00:24:01.456 "ublk_device": "/dev/ublkb0", 00:24:01.456 "id": 0, 00:24:01.456 "queue_depth": 512, 00:24:01.456 "num_queues": 4, 00:24:01.456 "bdev_name": "Malloc0" 00:24:01.456 } 00:24:01.456 ]' 00:24:01.456 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:24:01.715 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:01.715 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:24:01.715 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:24:01.715 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:24:01.715 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:24:01.715 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:24:01.715 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:24:01.715 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:24:01.715 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:01.715 15:35:47 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:24:01.715 15:35:47 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:24:01.715 15:35:47 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:24:01.715 15:35:47 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:24:01.715 15:35:47 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:24:01.715 15:35:47 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:24:01.715 15:35:47 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:24:01.715 15:35:47 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:24:01.715 15:35:47 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:24:01.715 15:35:47 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:24:01.715 15:35:47 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
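run_fio_test expands to a time-based 10-second 0xcc write across the full 128 MiB device. Note the warning fio prints below: because --time_based lets the write phase consume the entire runtime, the separate read-back verification phase never starts, which fio reports and the test tolerates. A hypothetical spot check, not part of the test, that the pattern actually landed:

    # Read the first 4 KiB back and confirm every byte is 0xcc; with a
    # uniform buffer, sort -u collapses od's 16-byte lines to a single line.
    dd if=/dev/ublkb0 bs=4096 count=1 2>/dev/null | od -An -tx1 | sort -u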
00:24:01.715 15:35:47 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:24:01.975 fio: verification read phase will never start because write phase uses all of runtime 00:24:01.975 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:24:01.975 fio-3.35 00:24:01.975 Starting 1 process 00:24:11.951 00:24:11.951 fio_test: (groupid=0, jobs=1): err= 0: pid=75646: Wed Nov 20 15:35:57 2024 00:24:11.951 write: IOPS=13.4k, BW=52.3MiB/s (54.8MB/s)(523MiB/10001msec); 0 zone resets 00:24:11.951 clat (usec): min=42, max=10108, avg=73.75, stdev=166.80 00:24:11.951 lat (usec): min=43, max=10109, avg=74.24, stdev=166.83 00:24:11.951 clat percentiles (usec): 00:24:11.951 | 1.00th=[ 59], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 62], 00:24:11.951 | 30.00th=[ 63], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 65], 00:24:11.952 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 74], 95.00th=[ 77], 00:24:11.952 | 99.00th=[ 89], 99.50th=[ 105], 99.90th=[ 3458], 99.95th=[ 3752], 00:24:11.952 | 99.99th=[ 4178] 00:24:11.952 bw ( KiB/s): min=23360, max=58464, per=99.64%, avg=53372.37, stdev=10416.38, samples=19 00:24:11.952 iops : min= 5840, max=14616, avg=13343.05, stdev=2604.08, samples=19 00:24:11.952 lat (usec) : 50=0.48%, 100=98.95%, 250=0.23%, 500=0.02%, 750=0.02% 00:24:11.952 lat (usec) : 1000=0.02% 00:24:11.952 lat (msec) : 2=0.07%, 4=0.19%, 10=0.03%, 20=0.01% 00:24:11.952 cpu : usr=2.99%, sys=9.46%, ctx=133925, majf=0, minf=796 00:24:11.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:11.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.952 issued rwts: total=0,133921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.952 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:11.952 00:24:11.952 Run status group 0 (all jobs): 00:24:11.952 WRITE: bw=52.3MiB/s (54.8MB/s), 52.3MiB/s-52.3MiB/s (54.8MB/s-54.8MB/s), io=523MiB (549MB), run=10001-10001msec 00:24:11.952 00:24:11.952 Disk stats (read/write): 00:24:11.952 ublkb0: ios=0/132434, merge=0/0, ticks=0/8681, in_queue=8681, util=98.92% 00:24:11.952 15:35:57 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:24:11.952 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.952 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 [2024-11-20 15:35:57.876710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:12.211 [2024-11-20 15:35:57.914626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:12.211 [2024-11-20 15:35:57.915369] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:12.211 [2024-11-20 15:35:57.922630] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:12.211 [2024-11-20 15:35:57.922935] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:12.211 [2024-11-20 15:35:57.922956] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.211 15:35:57 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:12.211 [2024-11-20 15:35:57.946675] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:24:12.211 request: 00:24:12.211 { 00:24:12.211 "ublk_id": 0, 00:24:12.211 "method": "ublk_stop_disk", 00:24:12.211 "req_id": 1 00:24:12.211 } 00:24:12.211 Got JSON-RPC error response 00:24:12.211 response: 00:24:12.211 { 00:24:12.211 "code": -19, 00:24:12.211 "message": "No such device" 00:24:12.211 } 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:12.211 15:35:57 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:12.211 [2024-11-20 15:35:57.962696] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:12.211 [2024-11-20 15:35:57.970596] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:12.211 [2024-11-20 15:35:57.970663] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.211 15:35:57 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.211 15:35:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:12.780 15:35:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.780 15:35:58 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:24:12.780 15:35:58 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:24:12.780 15:35:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.780 15:35:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:12.780 15:35:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.780 15:35:58 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:12.780 15:35:58 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:24:13.048 15:35:58 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:24:13.048 15:35:58 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:13.048 15:35:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.049 15:35:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:13.049 15:35:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.049 15:35:58 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:13.049 15:35:58 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:24:13.049 15:35:58 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:13.049 00:24:13.049 real 0m11.779s 00:24:13.049 user 0m0.678s 00:24:13.049 sys 0m1.079s 00:24:13.049 15:35:58 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.049 ************************************ 00:24:13.049 END TEST test_create_ublk 00:24:13.049 15:35:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:13.049 ************************************ 00:24:13.049 15:35:58 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:24:13.049 15:35:58 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:13.049 15:35:58 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:13.049 15:35:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:13.049 ************************************ 00:24:13.049 START TEST test_create_multi_ublk 00:24:13.049 ************************************ 00:24:13.049 15:35:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:24:13.049 15:35:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:24:13.049 15:35:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.049 15:35:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:13.049 [2024-11-20 15:35:58.870586] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:13.049 [2024-11-20 15:35:58.873177] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:13.049 15:35:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.049 15:35:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:24:13.049 15:35:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:24:13.049 15:35:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:13.049 15:35:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:24:13.049 15:35:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.049 15:35:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:13.321 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.321 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:24:13.321 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:13.321 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.321 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:13.321 [2024-11-20 15:35:59.154759] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:24:13.321 [2024-11-20 15:35:59.155252] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:13.321 [2024-11-20 15:35:59.155269] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:13.321 [2024-11-20 15:35:59.155283] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:13.321 [2024-11-20 15:35:59.164853] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:13.321 [2024-11-20 15:35:59.164880] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:13.321 [2024-11-20 15:35:59.170601] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:13.321 [2024-11-20 15:35:59.171164] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:13.321 [2024-11-20 15:35:59.180670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:13.321 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.321 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:24:13.321 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:13.321 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:24:13.321 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.321 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:13.580 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.580 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:24:13.580 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:24:13.580 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.580 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:13.580 [2024-11-20 15:35:59.476753] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:24:13.580 [2024-11-20 15:35:59.477218] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:24:13.580 [2024-11-20 15:35:59.477238] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:13.580 [2024-11-20 15:35:59.477246] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:13.580 [2024-11-20 15:35:59.485931] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:13.580 [2024-11-20 15:35:59.485954] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:13.580 [2024-11-20 15:35:59.492602] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:13.580 [2024-11-20 15:35:59.493179] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:13.580 [2024-11-20 15:35:59.509598] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:13.580 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.580 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:24:13.580 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:13.580 
15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:24:13.580 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.580 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:14.148 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.148 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:24:14.148 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:24:14.148 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.148 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:14.148 [2024-11-20 15:35:59.808728] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:24:14.148 [2024-11-20 15:35:59.809187] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:24:14.148 [2024-11-20 15:35:59.809205] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:24:14.148 [2024-11-20 15:35:59.809215] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:24:14.148 [2024-11-20 15:35:59.816609] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:14.148 [2024-11-20 15:35:59.816633] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:14.148 [2024-11-20 15:35:59.824605] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:14.148 [2024-11-20 15:35:59.825187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:24:14.148 [2024-11-20 15:35:59.848611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:24:14.148 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.148 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:24:14.148 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:14.148 15:35:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:24:14.148 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.148 15:35:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:14.407 [2024-11-20 15:36:00.144755] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:24:14.407 [2024-11-20 15:36:00.145206] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:24:14.407 [2024-11-20 15:36:00.145225] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:24:14.407 [2024-11-20 15:36:00.145233] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:24:14.407 
[2024-11-20 15:36:00.152621] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:14.407 [2024-11-20 15:36:00.152646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:14.407 [2024-11-20 15:36:00.160616] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:14.407 [2024-11-20 15:36:00.161188] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:24:14.407 [2024-11-20 15:36:00.164058] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:24:14.407 { 00:24:14.407 "ublk_device": "/dev/ublkb0", 00:24:14.407 "id": 0, 00:24:14.407 "queue_depth": 512, 00:24:14.407 "num_queues": 4, 00:24:14.407 "bdev_name": "Malloc0" 00:24:14.407 }, 00:24:14.407 { 00:24:14.407 "ublk_device": "/dev/ublkb1", 00:24:14.407 "id": 1, 00:24:14.407 "queue_depth": 512, 00:24:14.407 "num_queues": 4, 00:24:14.407 "bdev_name": "Malloc1" 00:24:14.407 }, 00:24:14.407 { 00:24:14.407 "ublk_device": "/dev/ublkb2", 00:24:14.407 "id": 2, 00:24:14.407 "queue_depth": 512, 00:24:14.407 "num_queues": 4, 00:24:14.407 "bdev_name": "Malloc2" 00:24:14.407 }, 00:24:14.407 { 00:24:14.407 "ublk_device": "/dev/ublkb3", 00:24:14.407 "id": 3, 00:24:14.407 "queue_depth": 512, 00:24:14.407 "num_queues": 4, 00:24:14.407 "bdev_name": "Malloc3" 00:24:14.407 } 00:24:14.407 ]' 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:24:14.407 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:14.408 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:24:14.408 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:14.408 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:24:14.408 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:24:14.408 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:24:14.408 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:14.408 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:14.667 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:14.925 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:24:15.184 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:24:15.184 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:24:15.184 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:24:15.184 15:36:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:24:15.184 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:15.184 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:24:15.184 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:15.184 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:24:15.184 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:24:15.184 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:24:15.184 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:24:15.184 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:15.184 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:24:15.184 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.184 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:15.184 [2024-11-20 15:36:01.103727] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:15.184 [2024-11-20 15:36:01.138074] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:15.184 [2024-11-20 15:36:01.139145] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:15.443 [2024-11-20 15:36:01.143661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:15.443 [2024-11-20 15:36:01.143980] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:15.443 [2024-11-20 15:36:01.144001] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:15.443 [2024-11-20 15:36:01.159692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:24:15.443 [2024-11-20 15:36:01.188982] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:15.443 [2024-11-20 15:36:01.190071] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:24:15.443 [2024-11-20 15:36:01.198612] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:15.443 [2024-11-20 15:36:01.198891] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:24:15.443 [2024-11-20 15:36:01.198905] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:15.443 [2024-11-20 15:36:01.213716] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:24:15.443 [2024-11-20 15:36:01.245648] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:15.443 [2024-11-20 15:36:01.246426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:24:15.443 [2024-11-20 15:36:01.253611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:15.443 [2024-11-20 15:36:01.253901] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:24:15.443 [2024-11-20 15:36:01.253920] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:24:15.443 [2024-11-20 15:36:01.269712] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:24:15.443 [2024-11-20 15:36:01.303623] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:15.443 [2024-11-20 15:36:01.304367] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:24:15.443 [2024-11-20 15:36:01.304876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:15.443 [2024-11-20 15:36:01.305469] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:24:15.443 [2024-11-20 15:36:01.305488] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.443 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:24:15.701 [2024-11-20 15:36:01.579693] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:15.701 [2024-11-20 15:36:01.587590] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:15.701 [2024-11-20 15:36:01.587635] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:15.702 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:24:15.702 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:15.702 15:36:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:15.702 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.702 15:36:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:16.663 15:36:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.663 15:36:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:16.663 15:36:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:16.663 15:36:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.663 15:36:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:16.923 15:36:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.923 15:36:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:16.923 15:36:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:24:16.923 15:36:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.923 15:36:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.182 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.182 15:36:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:17.182 15:36:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:24:17.182 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.182 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:24:17.750 15:36:03 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:17.750 00:24:17.750 real 0m4.744s 00:24:17.750 user 0m1.156s 00:24:17.750 sys 0m0.224s 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.750 15:36:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.750 ************************************ 00:24:17.750 END TEST test_create_multi_ublk 00:24:17.750 ************************************ 00:24:17.750 15:36:03 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:17.750 15:36:03 ublk -- ublk/ublk.sh@147 -- # cleanup 00:24:17.750 15:36:03 ublk -- ublk/ublk.sh@130 -- # killprocess 75599 00:24:17.750 15:36:03 ublk -- common/autotest_common.sh@954 -- # '[' -z 75599 ']' 00:24:17.750 15:36:03 ublk -- common/autotest_common.sh@958 -- # kill -0 75599 00:24:17.750 15:36:03 ublk -- common/autotest_common.sh@959 -- # uname 00:24:17.750 15:36:03 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.750 15:36:03 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75599 00:24:17.750 15:36:03 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:17.750 killing process with pid 75599 00:24:17.750 15:36:03 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:17.750 15:36:03 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75599' 00:24:17.750 15:36:03 ublk -- common/autotest_common.sh@973 -- # kill 75599 00:24:17.750 15:36:03 ublk -- common/autotest_common.sh@978 -- # wait 75599 00:24:19.129 [2024-11-20 15:36:04.901196] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:19.129 [2024-11-20 15:36:04.901252] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:20.507 00:24:20.507 real 0m31.054s 00:24:20.507 user 0m44.886s 00:24:20.507 sys 0m10.369s 00:24:20.507 15:36:06 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.507 ************************************ 00:24:20.507 END TEST ublk 00:24:20.507 ************************************ 00:24:20.507 15:36:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:20.507 15:36:06 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:20.507 
15:36:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:20.507 15:36:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.507 15:36:06 -- common/autotest_common.sh@10 -- # set +x 00:24:20.507 ************************************ 00:24:20.507 START TEST ublk_recovery 00:24:20.507 ************************************ 00:24:20.507 15:36:06 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:20.507 * Looking for test storage... 00:24:20.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:24:20.508 15:36:06 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:20.508 15:36:06 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:24:20.508 15:36:06 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:20.508 15:36:06 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.767 15:36:06 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:24:20.767 15:36:06 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.767 15:36:06 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:20.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.767 --rc genhtml_branch_coverage=1 00:24:20.767 --rc genhtml_function_coverage=1 00:24:20.767 --rc genhtml_legend=1 00:24:20.767 --rc geninfo_all_blocks=1 00:24:20.767 --rc geninfo_unexecuted_blocks=1 00:24:20.767 00:24:20.767 ' 00:24:20.767 15:36:06 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:20.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.767 --rc genhtml_branch_coverage=1 00:24:20.767 --rc genhtml_function_coverage=1 00:24:20.767 --rc genhtml_legend=1 00:24:20.767 --rc geninfo_all_blocks=1 00:24:20.767 --rc geninfo_unexecuted_blocks=1 00:24:20.767 00:24:20.767 ' 00:24:20.767 15:36:06 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:20.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.767 --rc genhtml_branch_coverage=1 00:24:20.767 --rc genhtml_function_coverage=1 00:24:20.767 --rc genhtml_legend=1 00:24:20.767 --rc geninfo_all_blocks=1 00:24:20.767 --rc geninfo_unexecuted_blocks=1 00:24:20.767 00:24:20.767 ' 00:24:20.767 15:36:06 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:20.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.767 --rc genhtml_branch_coverage=1 00:24:20.767 --rc genhtml_function_coverage=1 00:24:20.767 --rc genhtml_legend=1 00:24:20.767 --rc geninfo_all_blocks=1 00:24:20.767 --rc geninfo_unexecuted_blocks=1 00:24:20.767 00:24:20.767 ' 00:24:20.767 15:36:06 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:24:20.767 15:36:06 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:24:20.767 15:36:06 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:24:20.767 15:36:06 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:24:20.767 15:36:06 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:24:20.767 15:36:06 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:24:20.767 15:36:06 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:24:20.767 15:36:06 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:24:20.767 15:36:06 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:24:20.767 15:36:06 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:24:20.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.767 15:36:06 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76020 00:24:20.767 15:36:06 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:20.767 15:36:06 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:20.767 15:36:06 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76020 00:24:20.767 15:36:06 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76020 ']' 00:24:20.767 15:36:06 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.767 15:36:06 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.767 15:36:06 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.767 15:36:06 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.767 15:36:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.767 [2024-11-20 15:36:06.592048] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:24:20.767 [2024-11-20 15:36:06.592175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76020 ] 00:24:21.026 [2024-11-20 15:36:06.763596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:21.027 [2024-11-20 15:36:06.883187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.027 [2024-11-20 15:36:06.883199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.964 15:36:07 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.965 15:36:07 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:24:21.965 15:36:07 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:24:21.965 15:36:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.965 15:36:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.965 [2024-11-20 15:36:07.780601] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:21.965 [2024-11-20 15:36:07.782946] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:21.965 15:36:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.965 15:36:07 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:21.965 15:36:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.965 15:36:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.224 malloc0 00:24:22.224 15:36:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.224 15:36:07 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:24:22.224 15:36:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.224 15:36:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.224 [2024-11-20 15:36:07.947751] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:24:22.224 [2024-11-20 15:36:07.947874] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:24:22.224 [2024-11-20 15:36:07.947890] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:22.224 [2024-11-20 15:36:07.947901] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:22.224 [2024-11-20 15:36:07.955642] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:22.224 [2024-11-20 15:36:07.955667] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:22.224 [2024-11-20 15:36:07.963635] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:22.224 [2024-11-20 15:36:07.963783] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:22.224 [2024-11-20 15:36:07.994614] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:22.224 1 00:24:22.224 15:36:08 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.224 15:36:08 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:24:23.171 15:36:09 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:24:23.171 15:36:09 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76065 00:24:23.171 15:36:09 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:24:23.171 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:23.171 fio-3.35 00:24:23.171 Starting 1 process 00:24:28.447 15:36:14 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76020 00:24:28.447 15:36:14 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:24:33.722 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76020 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:24:33.722 15:36:19 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76166 00:24:33.722 15:36:19 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:33.722 15:36:19 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:33.722 15:36:19 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76166 00:24:33.722 15:36:19 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76166 ']' 00:24:33.722 15:36:19 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.722 15:36:19 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.722 15:36:19 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.722 15:36:19 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.722 15:36:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.722 [2024-11-20 15:36:19.161368] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
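At this point the test has hard-killed the original target (pid 76020) with SIGKILL while fio was still driving randrw I/O at /dev/ublkb1, and a second spdk_tgt (pid 76166) is coming up to take the device over. The crash-and-reattach half of the scenario reduces to roughly the following sketch (same $RPC client as above; the recover call is issued a little further down in this log):

    # Sketch: hard-kill the target, restart it, reattach the live ublk device.
    kill -9 "$spdk_pid"                         # target dies; fio keeps queueing I/O
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # fresh target on the same cores
    spdk_pid=$!
    # after waitforlisten succeeds:
    $RPC ublk_create_target
    $RPC bdev_malloc_create -b malloc0 64 4096  # recreate the backing bdev by name
    $RPC ublk_recover_disk malloc0 1            # GET_DEV_INFO -> START/END_USER_RECOVERY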
00:24:33.722 [2024-11-20 15:36:19.161639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76166 ] 00:24:33.722 [2024-11-20 15:36:19.343341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:33.722 [2024-11-20 15:36:19.512638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.722 [2024-11-20 15:36:19.512649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.661 15:36:20 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.661 15:36:20 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:24:34.661 15:36:20 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:24:34.661 15:36:20 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.661 15:36:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.661 [2024-11-20 15:36:20.400621] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:34.661 [2024-11-20 15:36:20.403262] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:34.661 15:36:20 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.661 15:36:20 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:34.661 15:36:20 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.661 15:36:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.661 malloc0 00:24:34.661 15:36:20 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.661 15:36:20 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:24:34.661 15:36:20 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.661 15:36:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.661 [2024-11-20 15:36:20.565772] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:24:34.661 [2024-11-20 15:36:20.565820] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:34.661 [2024-11-20 15:36:20.565832] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:34.661 [2024-11-20 15:36:20.573624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:34.661 [2024-11-20 15:36:20.573653] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:24:34.661 1 00:24:34.661 15:36:20 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.662 15:36:20 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76065 00:24:36.037 [2024-11-20 15:36:21.573691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:36.037 [2024-11-20 15:36:21.575634] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:36.037 [2024-11-20 15:36:21.575655] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:24:36.973 [2024-11-20 15:36:22.575693] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:36.973 [2024-11-20 15:36:22.579608] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:36.973 [2024-11-20 15:36:22.579622] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:24:37.908 [2024-11-20 15:36:23.581612] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:37.908 [2024-11-20 15:36:23.585655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:37.908 [2024-11-20 15:36:23.585674] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:24:37.908 [2024-11-20 15:36:23.585688] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:24:37.908 [2024-11-20 15:36:23.585788] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:24:59.838 [2024-11-20 15:36:44.310599] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:24:59.838 [2024-11-20 15:36:44.317215] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:24:59.838 [2024-11-20 15:36:44.324780] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:24:59.838 [2024-11-20 15:36:44.324805] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:25:26.416 00:25:26.416 fio_test: (groupid=0, jobs=1): err= 0: pid=76069: Wed Nov 20 15:37:09 2024 00:25:26.416 read: IOPS=12.2k, BW=47.8MiB/s (50.2MB/s)(2870MiB/60002msec) 00:25:26.416 slat (nsec): min=1901, max=169449, avg=5815.73, stdev=1612.43 00:25:26.416 clat (usec): min=822, max=30323k, avg=5048.43, stdev=276242.59 00:25:26.416 lat (usec): min=827, max=30323k, avg=5054.24, stdev=276242.59 00:25:26.416 clat percentiles (usec): 00:25:26.416 | 1.00th=[ 2008], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2278], 00:25:26.416 | 30.00th=[ 2278], 40.00th=[ 2311], 50.00th=[ 2343], 60.00th=[ 2376], 00:25:26.416 | 70.00th=[ 2376], 80.00th=[ 2442], 90.00th=[ 3097], 95.00th=[ 3818], 00:25:26.416 | 99.00th=[ 5538], 99.50th=[ 6063], 99.90th=[ 7832], 99.95th=[ 8455], 00:25:26.416 | 99.99th=[13435] 00:25:26.416 bw ( KiB/s): min=45024, max=105472, per=100.00%, avg=98041.49, stdev=13150.04, samples=59 00:25:26.416 iops : min=11256, max=26368, avg=24510.36, stdev=3287.51, samples=59 00:25:26.416 write: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(2866MiB/60002msec); 0 zone resets 00:25:26.416 slat (nsec): min=1961, max=151693, avg=5866.05, stdev=1654.78 00:25:26.416 clat (usec): min=700, max=30323k, avg=5398.70, stdev=289717.95 00:25:26.416 lat (usec): min=705, max=30323k, avg=5404.57, stdev=289717.95 00:25:26.416 clat percentiles (usec): 00:25:26.416 | 1.00th=[ 2040], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2376], 00:25:26.416 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2442], 60.00th=[ 2474], 00:25:26.416 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 3228], 95.00th=[ 3818], 00:25:26.416 | 99.00th=[ 5669], 99.50th=[ 6128], 99.90th=[ 7898], 99.95th=[ 8586], 00:25:26.416 | 99.99th=[13698] 00:25:26.416 bw ( KiB/s): min=46456, max=105144, per=100.00%, avg=97932.53, stdev=12916.25, samples=59 00:25:26.416 iops : min=11614, max=26286, avg=24483.12, stdev=3229.06, samples=59 00:25:26.416 lat (usec) : 750=0.01%, 1000=0.01% 00:25:26.416 lat (msec) : 2=0.85%, 4=94.77%, 10=4.34%, 20=0.03%, >=2000=0.01% 00:25:26.416 cpu : usr=5.20%, sys=14.31%, ctx=50733, majf=0, minf=13 00:25:26.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:25:26.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:26.416 
issued rwts: total=734780,733718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:26.416 00:25:26.416 Run status group 0 (all jobs): 00:25:26.416 READ: bw=47.8MiB/s (50.2MB/s), 47.8MiB/s-47.8MiB/s (50.2MB/s-50.2MB/s), io=2870MiB (3010MB), run=60002-60002msec 00:25:26.416 WRITE: bw=47.8MiB/s (50.1MB/s), 47.8MiB/s-47.8MiB/s (50.1MB/s-50.1MB/s), io=2866MiB (3005MB), run=60002-60002msec 00:25:26.416 00:25:26.416 Disk stats (read/write): 00:25:26.416 ublkb1: ios=731949/730946, merge=0/0, ticks=3657280/3833769, in_queue=7491050, util=99.92% 00:25:26.416 15:37:09 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 [2024-11-20 15:37:09.286102] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:25:26.416 [2024-11-20 15:37:09.312697] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:26.416 [2024-11-20 15:37:09.312881] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:25:26.416 [2024-11-20 15:37:09.322606] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:26.416 [2024-11-20 15:37:09.322730] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:25:26.416 [2024-11-20 15:37:09.322743] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 15:37:09 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 [2024-11-20 15:37:09.338721] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:26.416 [2024-11-20 15:37:09.345608] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:25:26.416 [2024-11-20 15:37:09.345652] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 15:37:09 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:25:26.416 15:37:09 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:25:26.416 15:37:09 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76166 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76166 ']' 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76166 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76166 00:25:26.416 killing process with pid 76166 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76166' 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76166 00:25:26.416 15:37:09 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76166 
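Recovery has now been verified end to end: the driver reported 'Ublk 1 recover done successfully', the 60-second randrw job finished with healthy throughput both ways (about 47.8 MiB/s read and write), and ublkb1 shows ~99.9% utilization in the disk stats. The teardown just traced is the mirror image of setup; as a sketch with the same assumed $RPC client:

    # Sketch: tear down the recovered device and the target.
    $RPC ublk_stop_disk 1                # STOP_DEV -> DEL_DEV, device 1 removed from tailq
    $RPC ublk_destroy_target             # 'ublk target has been destroyed'
    kill "$spdk_pid"; wait "$spdk_pid"   # stop the replacement spdk_tgt (pid 76166)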
00:25:26.416 [2024-11-20 15:37:10.982385] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:26.416 [2024-11-20 15:37:10.982462] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:25:26.725 00:25:26.725 real 1m6.111s 00:25:26.725 user 1m52.756s 00:25:26.725 sys 0m20.463s 00:25:26.725 15:37:12 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.725 15:37:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.725 ************************************ 00:25:26.725 END TEST ublk_recovery 00:25:26.725 ************************************ 00:25:26.725 15:37:12 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:25:26.725 15:37:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:25:26.725 15:37:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:25:26.725 15:37:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:26.725 15:37:12 -- common/autotest_common.sh@10 -- # set +x 00:25:26.725 15:37:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:25:26.725 15:37:12 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:25:26.725 15:37:12 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:25:26.725 15:37:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:26.725 15:37:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:26.725 15:37:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:26.725 15:37:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:26.725 15:37:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:26.725 15:37:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:26.725 15:37:12 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:25:26.725 15:37:12 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:26.725 15:37:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:26.725 15:37:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:26.725 15:37:12 -- common/autotest_common.sh@10 -- # set +x 00:25:26.725 ************************************ 00:25:26.725 START TEST ftl 00:25:26.725 ************************************ 00:25:26.725 15:37:12 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:26.725 * Looking for test storage... 
00:25:26.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:26.725 15:37:12 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:26.725 15:37:12 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:25:26.725 15:37:12 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:26.725 15:37:12 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:26.725 15:37:12 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:26.725 15:37:12 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:26.725 15:37:12 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:26.725 15:37:12 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:25:26.725 15:37:12 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:25:26.725 15:37:12 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:25:26.725 15:37:12 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:25:26.725 15:37:12 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:25:26.725 15:37:12 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:25:26.725 15:37:12 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:25:26.725 15:37:12 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:26.725 15:37:12 ftl -- scripts/common.sh@344 -- # case "$op" in 00:25:26.725 15:37:12 ftl -- scripts/common.sh@345 -- # : 1 00:25:26.725 15:37:12 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:26.725 15:37:12 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:26.725 15:37:12 ftl -- scripts/common.sh@365 -- # decimal 1 00:25:26.725 15:37:12 ftl -- scripts/common.sh@353 -- # local d=1 00:25:26.725 15:37:12 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:26.725 15:37:12 ftl -- scripts/common.sh@355 -- # echo 1 00:25:26.725 15:37:12 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:25:26.725 15:37:12 ftl -- scripts/common.sh@366 -- # decimal 2 00:25:26.725 15:37:12 ftl -- scripts/common.sh@353 -- # local d=2 00:25:26.725 15:37:12 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:26.725 15:37:12 ftl -- scripts/common.sh@355 -- # echo 2 00:25:26.725 15:37:12 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:25:26.725 15:37:12 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:26.725 15:37:12 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:26.725 15:37:12 ftl -- scripts/common.sh@368 -- # return 0 00:25:26.725 15:37:12 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:26.725 15:37:12 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:26.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.725 --rc genhtml_branch_coverage=1 00:25:26.726 --rc genhtml_function_coverage=1 00:25:26.726 --rc genhtml_legend=1 00:25:26.726 --rc geninfo_all_blocks=1 00:25:26.726 --rc geninfo_unexecuted_blocks=1 00:25:26.726 00:25:26.726 ' 00:25:26.726 15:37:12 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.726 --rc genhtml_branch_coverage=1 00:25:26.726 --rc genhtml_function_coverage=1 00:25:26.726 --rc genhtml_legend=1 00:25:26.726 --rc geninfo_all_blocks=1 00:25:26.726 --rc geninfo_unexecuted_blocks=1 00:25:26.726 00:25:26.726 ' 00:25:26.726 15:37:12 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.726 --rc genhtml_branch_coverage=1 00:25:26.726 --rc genhtml_function_coverage=1 00:25:26.726 --rc 
genhtml_legend=1 00:25:26.726 --rc geninfo_all_blocks=1 00:25:26.726 --rc geninfo_unexecuted_blocks=1 00:25:26.726 00:25:26.726 ' 00:25:26.726 15:37:12 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.726 --rc genhtml_branch_coverage=1 00:25:26.726 --rc genhtml_function_coverage=1 00:25:26.726 --rc genhtml_legend=1 00:25:26.726 --rc geninfo_all_blocks=1 00:25:26.726 --rc geninfo_unexecuted_blocks=1 00:25:26.726 00:25:26.726 ' 00:25:26.726 15:37:12 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:26.985 15:37:12 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:26.985 15:37:12 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:26.985 15:37:12 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:26.985 15:37:12 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:26.985 15:37:12 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:26.985 15:37:12 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:26.985 15:37:12 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:26.985 15:37:12 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:26.985 15:37:12 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.985 15:37:12 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.985 15:37:12 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:26.985 15:37:12 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:26.985 15:37:12 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:26.985 15:37:12 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:26.985 15:37:12 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:26.985 15:37:12 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:26.985 15:37:12 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.985 15:37:12 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.985 15:37:12 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:26.985 15:37:12 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:26.985 15:37:12 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:26.985 15:37:12 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:26.985 15:37:12 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:26.985 15:37:12 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:26.985 15:37:12 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:26.985 15:37:12 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:26.985 15:37:12 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:26.985 15:37:12 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:26.985 15:37:12 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:26.985 15:37:12 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:25:26.985 15:37:12 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:25:26.985 15:37:12 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:25:26.985 15:37:12 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:25:26.985 15:37:12 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:27.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:27.501 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:27.501 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:27.501 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:27.501 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:27.501 15:37:13 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76974 00:25:27.501 15:37:13 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:25:27.501 15:37:13 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76974 00:25:27.501 15:37:13 ftl -- common/autotest_common.sh@835 -- # '[' -z 76974 ']' 00:25:27.501 15:37:13 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.501 15:37:13 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.501 15:37:13 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.501 15:37:13 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.501 15:37:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:27.758 [2024-11-20 15:37:13.485919] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:25:27.758 [2024-11-20 15:37:13.486097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76974 ] 00:25:27.758 [2024-11-20 15:37:13.675725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.016 [2024-11-20 15:37:13.788806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.583 15:37:14 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.583 15:37:14 ftl -- common/autotest_common.sh@868 -- # return 0 00:25:28.583 15:37:14 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:25:28.842 15:37:14 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:25:29.781 15:37:15 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:25:29.781 15:37:15 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:30.348 15:37:16 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:25:30.348 15:37:16 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:30.348 15:37:16 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:30.610 15:37:16 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:25:30.610 15:37:16 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:25:30.610 15:37:16 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:25:30.610 15:37:16 ftl -- ftl/ftl.sh@50 -- # break 00:25:30.610 15:37:16 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:25:30.610 15:37:16 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:25:30.610 15:37:16 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:30.610 15:37:16 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:30.878 15:37:16 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:25:30.878 15:37:16 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:25:30.878 15:37:16 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:25:30.878 15:37:16 ftl -- ftl/ftl.sh@63 -- # break 00:25:30.878 15:37:16 ftl -- ftl/ftl.sh@66 -- # killprocess 76974 00:25:30.878 15:37:16 ftl -- common/autotest_common.sh@954 -- # '[' -z 76974 ']' 00:25:30.878 15:37:16 ftl -- common/autotest_common.sh@958 -- # kill -0 76974 00:25:30.878 15:37:16 ftl -- common/autotest_common.sh@959 -- # uname 00:25:30.878 15:37:16 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.878 15:37:16 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76974 00:25:30.878 15:37:16 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:30.878 15:37:16 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:30.878 killing process with pid 76974 00:25:30.878 15:37:16 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76974' 00:25:30.878 15:37:16 ftl -- common/autotest_common.sh@973 -- # kill 76974 00:25:30.878 15:37:16 ftl -- common/autotest_common.sh@978 -- # wait 76974 00:25:33.426 15:37:18 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:25:33.426 15:37:18 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:33.426 15:37:18 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:33.426 15:37:18 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:33.426 15:37:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:33.426 ************************************ 00:25:33.426 START TEST ftl_fio_basic 00:25:33.426 ************************************ 00:25:33.426 15:37:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:33.426 * Looking for test storage... 
00:25:33.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:33.426 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:33.426 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:25:33.426 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:33.426 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:33.426 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:33.426 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:33.426 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:33.426 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:33.427 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:33.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.427 --rc genhtml_branch_coverage=1 00:25:33.427 --rc genhtml_function_coverage=1 00:25:33.427 --rc genhtml_legend=1 00:25:33.427 --rc geninfo_all_blocks=1 00:25:33.427 --rc geninfo_unexecuted_blocks=1 00:25:33.427 00:25:33.428 ' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:33.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.428 --rc 
genhtml_branch_coverage=1 00:25:33.428 --rc genhtml_function_coverage=1 00:25:33.428 --rc genhtml_legend=1 00:25:33.428 --rc geninfo_all_blocks=1 00:25:33.428 --rc geninfo_unexecuted_blocks=1 00:25:33.428 00:25:33.428 ' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:33.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.428 --rc genhtml_branch_coverage=1 00:25:33.428 --rc genhtml_function_coverage=1 00:25:33.428 --rc genhtml_legend=1 00:25:33.428 --rc geninfo_all_blocks=1 00:25:33.428 --rc geninfo_unexecuted_blocks=1 00:25:33.428 00:25:33.428 ' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:33.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.428 --rc genhtml_branch_coverage=1 00:25:33.428 --rc genhtml_function_coverage=1 00:25:33.428 --rc genhtml_legend=1 00:25:33.428 --rc geninfo_all_blocks=1 00:25:33.428 --rc geninfo_unexecuted_blocks=1 00:25:33.428 00:25:33.428 ' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:33.428 
15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77123 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77123 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77123 ']' 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
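Two helpers traced in this stretch are worth unpacking. The scripts/common.sh block above (it runs once per run_test prefix, which is why the same wall of lcov options appears for both ftl and ftl.ftl_fio_basic) is a dotted-version comparison: split the 'lcov --version' output on IFS=.-:, compare it component by component against 2, and keep the pre-2.x lcov_branch_coverage=1 option names when the installed lcov is older. A sketch of that pattern, not the verbatim cmp_versions helper:

    # Dotted-version less-than, as traced above for "lt 1.15 2" (sketch only).
    cmp_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1   # equal, so not less-than
    }
    cmp_lt 1.15 2 && echo "lcov < 2: keep the lcov_branch_coverage=1 spelling"

waitforlisten 77123, also above, blocks until the freshly launched spdk_tgt answers RPC on /var/tmp/spdk.sock. A minimal sketch of that polling idea, assuming only the stock rpc.py client (again the pattern, not the verbatim autotest_common.sh helper):

    # Poll until an SPDK target answers RPC on its UNIX socket (sketch).
    pid=77123
    sock=/var/tmp/spdk.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        # rpc_get_methods is a cheap request any live target can answer
        "$rpc" -s "$sock" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done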
00:25:33.428 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.428 15:37:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:33.690 [2024-11-20 15:37:19.432822] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:25:33.690 [2024-11-20 15:37:19.432998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77123 ] 00:25:33.690 [2024-11-20 15:37:19.611167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:33.949 [2024-11-20 15:37:19.731256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.949 [2024-11-20 15:37:19.731396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.949 [2024-11-20 15:37:19.731428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.886 15:37:20 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.886 15:37:20 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:25:34.886 15:37:20 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:34.886 15:37:20 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:25:34.886 15:37:20 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:34.886 15:37:20 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:25:34.886 15:37:20 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:25:34.886 15:37:20 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:35.145 15:37:20 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:35.145 15:37:20 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:25:35.145 15:37:20 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:35.145 15:37:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:35.145 15:37:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:35.145 15:37:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:35.145 15:37:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:35.145 15:37:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:35.404 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:35.404 { 00:25:35.404 "name": "nvme0n1", 00:25:35.404 "aliases": [ 00:25:35.404 "275b3250-cc72-42bb-a8c2-c83c130cb8c2" 00:25:35.404 ], 00:25:35.404 "product_name": "NVMe disk", 00:25:35.404 "block_size": 4096, 00:25:35.404 "num_blocks": 1310720, 00:25:35.404 "uuid": "275b3250-cc72-42bb-a8c2-c83c130cb8c2", 00:25:35.404 "numa_id": -1, 00:25:35.404 "assigned_rate_limits": { 00:25:35.404 "rw_ios_per_sec": 0, 00:25:35.404 "rw_mbytes_per_sec": 0, 00:25:35.404 "r_mbytes_per_sec": 0, 00:25:35.404 "w_mbytes_per_sec": 0 00:25:35.404 }, 00:25:35.404 "claimed": false, 00:25:35.404 "zoned": false, 00:25:35.404 "supported_io_types": { 00:25:35.404 "read": true, 00:25:35.404 "write": true, 00:25:35.405 "unmap": true, 00:25:35.405 "flush": true, 00:25:35.405 "reset": true, 00:25:35.405 "nvme_admin": true, 00:25:35.405 "nvme_io": true, 00:25:35.405 "nvme_io_md": 
false, 00:25:35.405 "write_zeroes": true, 00:25:35.405 "zcopy": false, 00:25:35.405 "get_zone_info": false, 00:25:35.405 "zone_management": false, 00:25:35.405 "zone_append": false, 00:25:35.405 "compare": true, 00:25:35.405 "compare_and_write": false, 00:25:35.405 "abort": true, 00:25:35.405 "seek_hole": false, 00:25:35.405 "seek_data": false, 00:25:35.405 "copy": true, 00:25:35.405 "nvme_iov_md": false 00:25:35.405 }, 00:25:35.405 "driver_specific": { 00:25:35.405 "nvme": [ 00:25:35.405 { 00:25:35.405 "pci_address": "0000:00:11.0", 00:25:35.405 "trid": { 00:25:35.405 "trtype": "PCIe", 00:25:35.405 "traddr": "0000:00:11.0" 00:25:35.405 }, 00:25:35.405 "ctrlr_data": { 00:25:35.405 "cntlid": 0, 00:25:35.405 "vendor_id": "0x1b36", 00:25:35.405 "model_number": "QEMU NVMe Ctrl", 00:25:35.405 "serial_number": "12341", 00:25:35.405 "firmware_revision": "8.0.0", 00:25:35.405 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:35.405 "oacs": { 00:25:35.405 "security": 0, 00:25:35.405 "format": 1, 00:25:35.405 "firmware": 0, 00:25:35.405 "ns_manage": 1 00:25:35.405 }, 00:25:35.405 "multi_ctrlr": false, 00:25:35.405 "ana_reporting": false 00:25:35.405 }, 00:25:35.405 "vs": { 00:25:35.405 "nvme_version": "1.4" 00:25:35.405 }, 00:25:35.405 "ns_data": { 00:25:35.405 "id": 1, 00:25:35.405 "can_share": false 00:25:35.405 } 00:25:35.405 } 00:25:35.405 ], 00:25:35.405 "mp_policy": "active_passive" 00:25:35.405 } 00:25:35.405 } 00:25:35.405 ]' 00:25:35.405 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:35.405 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:35.405 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:35.405 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:35.405 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:35.405 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:25:35.405 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:25:35.405 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:35.405 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:25:35.405 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:35.405 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:35.664 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:25:35.664 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:35.923 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=0645a590-db81-4cde-9f3c-a1bd15ad97d4 00:25:35.923 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0645a590-db81-4cde-9f3c-a1bd15ad97d4 00:25:36.182 15:37:21 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:36.182 15:37:21 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:36.182 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:25:36.182 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:36.182 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:36.182 15:37:21 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:25:36.182 15:37:21 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:36.182 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:36.182 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:36.182 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:36.182 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:36.182 15:37:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:36.441 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:36.441 { 00:25:36.441 "name": "67c5ea6d-cb1e-4820-94ee-c6d990af2a84", 00:25:36.442 "aliases": [ 00:25:36.442 "lvs/nvme0n1p0" 00:25:36.442 ], 00:25:36.442 "product_name": "Logical Volume", 00:25:36.442 "block_size": 4096, 00:25:36.442 "num_blocks": 26476544, 00:25:36.442 "uuid": "67c5ea6d-cb1e-4820-94ee-c6d990af2a84", 00:25:36.442 "assigned_rate_limits": { 00:25:36.442 "rw_ios_per_sec": 0, 00:25:36.442 "rw_mbytes_per_sec": 0, 00:25:36.442 "r_mbytes_per_sec": 0, 00:25:36.442 "w_mbytes_per_sec": 0 00:25:36.442 }, 00:25:36.442 "claimed": false, 00:25:36.442 "zoned": false, 00:25:36.442 "supported_io_types": { 00:25:36.442 "read": true, 00:25:36.442 "write": true, 00:25:36.442 "unmap": true, 00:25:36.442 "flush": false, 00:25:36.442 "reset": true, 00:25:36.442 "nvme_admin": false, 00:25:36.442 "nvme_io": false, 00:25:36.442 "nvme_io_md": false, 00:25:36.442 "write_zeroes": true, 00:25:36.442 "zcopy": false, 00:25:36.442 "get_zone_info": false, 00:25:36.442 "zone_management": false, 00:25:36.442 "zone_append": false, 00:25:36.442 "compare": false, 00:25:36.442 "compare_and_write": false, 00:25:36.442 "abort": false, 00:25:36.442 "seek_hole": true, 00:25:36.442 "seek_data": true, 00:25:36.442 "copy": false, 00:25:36.442 "nvme_iov_md": false 00:25:36.442 }, 00:25:36.442 "driver_specific": { 00:25:36.442 "lvol": { 00:25:36.442 "lvol_store_uuid": "0645a590-db81-4cde-9f3c-a1bd15ad97d4", 00:25:36.442 "base_bdev": "nvme0n1", 00:25:36.442 "thin_provision": true, 00:25:36.442 "num_allocated_clusters": 0, 00:25:36.442 "snapshot": false, 00:25:36.442 "clone": false, 00:25:36.442 "esnap_clone": false 00:25:36.442 } 00:25:36.442 } 00:25:36.442 } 00:25:36.442 ]' 00:25:36.442 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:36.442 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:36.442 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:36.442 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:36.442 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:36.442 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:36.442 15:37:22 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:25:36.442 15:37:22 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:25:36.442 15:37:22 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:37.010 { 00:25:37.010 "name": "67c5ea6d-cb1e-4820-94ee-c6d990af2a84", 00:25:37.010 "aliases": [ 00:25:37.010 "lvs/nvme0n1p0" 00:25:37.010 ], 00:25:37.010 "product_name": "Logical Volume", 00:25:37.010 "block_size": 4096, 00:25:37.010 "num_blocks": 26476544, 00:25:37.010 "uuid": "67c5ea6d-cb1e-4820-94ee-c6d990af2a84", 00:25:37.010 "assigned_rate_limits": { 00:25:37.010 "rw_ios_per_sec": 0, 00:25:37.010 "rw_mbytes_per_sec": 0, 00:25:37.010 "r_mbytes_per_sec": 0, 00:25:37.010 "w_mbytes_per_sec": 0 00:25:37.010 }, 00:25:37.010 "claimed": false, 00:25:37.010 "zoned": false, 00:25:37.010 "supported_io_types": { 00:25:37.010 "read": true, 00:25:37.010 "write": true, 00:25:37.010 "unmap": true, 00:25:37.010 "flush": false, 00:25:37.010 "reset": true, 00:25:37.010 "nvme_admin": false, 00:25:37.010 "nvme_io": false, 00:25:37.010 "nvme_io_md": false, 00:25:37.010 "write_zeroes": true, 00:25:37.010 "zcopy": false, 00:25:37.010 "get_zone_info": false, 00:25:37.010 "zone_management": false, 00:25:37.010 "zone_append": false, 00:25:37.010 "compare": false, 00:25:37.010 "compare_and_write": false, 00:25:37.010 "abort": false, 00:25:37.010 "seek_hole": true, 00:25:37.010 "seek_data": true, 00:25:37.010 "copy": false, 00:25:37.010 "nvme_iov_md": false 00:25:37.010 }, 00:25:37.010 "driver_specific": { 00:25:37.010 "lvol": { 00:25:37.010 "lvol_store_uuid": "0645a590-db81-4cde-9f3c-a1bd15ad97d4", 00:25:37.010 "base_bdev": "nvme0n1", 00:25:37.010 "thin_provision": true, 00:25:37.010 "num_allocated_clusters": 0, 00:25:37.010 "snapshot": false, 00:25:37.010 "clone": false, 00:25:37.010 "esnap_clone": false 00:25:37.010 } 00:25:37.010 } 00:25:37.010 } 00:25:37.010 ]' 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:37.010 15:37:22 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:25:37.011 15:37:22 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:37.270 15:37:23 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:25:37.270 15:37:23 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:25:37.270 15:37:23 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:25:37.270 
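The sizing arithmetic in this stretch all follows from get_bdev_size (block_size x num_blocks, scaled to MiB): the raw namespace earlier came out at 4096 B x 1310720 blocks = 5120 MiB, which is why the requested 103424 MiB volume had to be a thin-provisioned lvol (the -t flag on bdev_lvol_create), and the cache_size=5171 above is exactly 103424/20 under integer division, i.e. the nvc0n1p0 write-buffer cache carved out by bdev_split_create is about 5% of the base volume. The '[: -eq: unary operator expected' complaint printed just below is a separate, classic shell bug: fio.sh line 52 expands an empty variable unquoted, so the test collapses to '[ -eq 1 ]'. A minimal sketch of the failure and the usual fixes (the variable name is hypothetical, not the script's actual one):

    # Reproduce and fix "[: -eq: unary operator expected" (sketch).
    flag=""
    # [ $flag -eq 1 ]                  # expands to [ -eq 1 ]  -> the error below
    [ "${flag:-0}" -eq 1 ] || echo no  # fix 1: give the expansion a default
    [[ $flag -eq 1 ]] || echo no       # fix 2: [[ ]] evaluates the empty string as 0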
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:25:37.270 15:37:23 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:37.270 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:37.270 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:37.270 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:37.270 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:37.270 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 67c5ea6d-cb1e-4820-94ee-c6d990af2a84 00:25:37.528 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:37.528 { 00:25:37.528 "name": "67c5ea6d-cb1e-4820-94ee-c6d990af2a84", 00:25:37.528 "aliases": [ 00:25:37.528 "lvs/nvme0n1p0" 00:25:37.528 ], 00:25:37.529 "product_name": "Logical Volume", 00:25:37.529 "block_size": 4096, 00:25:37.529 "num_blocks": 26476544, 00:25:37.529 "uuid": "67c5ea6d-cb1e-4820-94ee-c6d990af2a84", 00:25:37.529 "assigned_rate_limits": { 00:25:37.529 "rw_ios_per_sec": 0, 00:25:37.529 "rw_mbytes_per_sec": 0, 00:25:37.529 "r_mbytes_per_sec": 0, 00:25:37.529 "w_mbytes_per_sec": 0 00:25:37.529 }, 00:25:37.529 "claimed": false, 00:25:37.529 "zoned": false, 00:25:37.529 "supported_io_types": { 00:25:37.529 "read": true, 00:25:37.529 "write": true, 00:25:37.529 "unmap": true, 00:25:37.529 "flush": false, 00:25:37.529 "reset": true, 00:25:37.529 "nvme_admin": false, 00:25:37.529 "nvme_io": false, 00:25:37.529 "nvme_io_md": false, 00:25:37.529 "write_zeroes": true, 00:25:37.529 "zcopy": false, 00:25:37.529 "get_zone_info": false, 00:25:37.529 "zone_management": false, 00:25:37.529 "zone_append": false, 00:25:37.529 "compare": false, 00:25:37.529 "compare_and_write": false, 00:25:37.529 "abort": false, 00:25:37.529 "seek_hole": true, 00:25:37.529 "seek_data": true, 00:25:37.529 "copy": false, 00:25:37.529 "nvme_iov_md": false 00:25:37.529 }, 00:25:37.529 "driver_specific": { 00:25:37.529 "lvol": { 00:25:37.529 "lvol_store_uuid": "0645a590-db81-4cde-9f3c-a1bd15ad97d4", 00:25:37.529 "base_bdev": "nvme0n1", 00:25:37.529 "thin_provision": true, 00:25:37.529 "num_allocated_clusters": 0, 00:25:37.529 "snapshot": false, 00:25:37.529 "clone": false, 00:25:37.529 "esnap_clone": false 00:25:37.529 } 00:25:37.529 } 00:25:37.529 } 00:25:37.529 ]' 00:25:37.529 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:37.529 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:37.529 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:37.529 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:37.529 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:37.529 15:37:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:37.529 15:37:23 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:25:37.529 15:37:23 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:25:37.529 15:37:23 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 67c5ea6d-cb1e-4820-94ee-c6d990af2a84 -c nvc0n1p0 --l2p_dram_limit 60 00:25:37.789 [2024-11-20 15:37:23.614466] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.789 [2024-11-20 15:37:23.614523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:37.789 [2024-11-20 15:37:23.614543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:37.789 [2024-11-20 15:37:23.614554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.789 [2024-11-20 15:37:23.614658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.789 [2024-11-20 15:37:23.614676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:37.789 [2024-11-20 15:37:23.614690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:37.789 [2024-11-20 15:37:23.614700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.789 [2024-11-20 15:37:23.614733] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:37.789 [2024-11-20 15:37:23.615775] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:37.789 [2024-11-20 15:37:23.615814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.789 [2024-11-20 15:37:23.615826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:37.789 [2024-11-20 15:37:23.615840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:25:37.789 [2024-11-20 15:37:23.615850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.789 [2024-11-20 15:37:23.615984] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fe655d9a-57af-4a8b-b2f8-27f6526e63f7 00:25:37.789 [2024-11-20 15:37:23.617524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.789 [2024-11-20 15:37:23.617559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:37.789 [2024-11-20 15:37:23.617581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:37.789 [2024-11-20 15:37:23.617610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.789 [2024-11-20 15:37:23.625159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.789 [2024-11-20 15:37:23.625194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:37.789 [2024-11-20 15:37:23.625207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.473 ms 00:25:37.789 [2024-11-20 15:37:23.625220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.789 [2024-11-20 15:37:23.625339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.789 [2024-11-20 15:37:23.625355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:37.789 [2024-11-20 15:37:23.625367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:25:37.789 [2024-11-20 15:37:23.625384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.789 [2024-11-20 15:37:23.625455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.789 [2024-11-20 15:37:23.625470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:37.789 [2024-11-20 15:37:23.625480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:37.789 [2024-11-20 15:37:23.625493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:37.789 [2024-11-20 15:37:23.625539] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:37.789 [2024-11-20 15:37:23.630670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.789 [2024-11-20 15:37:23.630829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:37.789 [2024-11-20 15:37:23.630856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.147 ms 00:25:37.789 [2024-11-20 15:37:23.630870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.789 [2024-11-20 15:37:23.630920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.789 [2024-11-20 15:37:23.630932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:37.789 [2024-11-20 15:37:23.630945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:37.789 [2024-11-20 15:37:23.630956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.789 [2024-11-20 15:37:23.631019] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:37.789 [2024-11-20 15:37:23.631173] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:37.789 [2024-11-20 15:37:23.631201] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:37.789 [2024-11-20 15:37:23.631215] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:37.789 [2024-11-20 15:37:23.631231] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:37.789 [2024-11-20 15:37:23.631244] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:37.789 [2024-11-20 15:37:23.631258] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:37.789 [2024-11-20 15:37:23.631268] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:37.789 [2024-11-20 15:37:23.631280] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:37.789 [2024-11-20 15:37:23.631290] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:37.789 [2024-11-20 15:37:23.631303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.789 [2024-11-20 15:37:23.631317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:37.789 [2024-11-20 15:37:23.631331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:25:37.789 [2024-11-20 15:37:23.631342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.789 [2024-11-20 15:37:23.631476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.789 [2024-11-20 15:37:23.631490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:37.789 [2024-11-20 15:37:23.631504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:25:37.789 [2024-11-20 15:37:23.631514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.789 [2024-11-20 15:37:23.631634] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:37.789 [2024-11-20 15:37:23.631647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:37.789 
[2024-11-20 15:37:23.631663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.789 [2024-11-20 15:37:23.631675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.789 [2024-11-20 15:37:23.631688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:37.789 [2024-11-20 15:37:23.631697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:37.789 [2024-11-20 15:37:23.631709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:37.789 [2024-11-20 15:37:23.631719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:37.789 [2024-11-20 15:37:23.631731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:37.789 [2024-11-20 15:37:23.631741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.789 [2024-11-20 15:37:23.631752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:37.789 [2024-11-20 15:37:23.631762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:37.789 [2024-11-20 15:37:23.631774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.789 [2024-11-20 15:37:23.631783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:37.789 [2024-11-20 15:37:23.631795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:37.789 [2024-11-20 15:37:23.631804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.789 [2024-11-20 15:37:23.631819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:37.789 [2024-11-20 15:37:23.631829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:37.789 [2024-11-20 15:37:23.631840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.789 [2024-11-20 15:37:23.631850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:37.789 [2024-11-20 15:37:23.631861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:37.789 [2024-11-20 15:37:23.631870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.789 [2024-11-20 15:37:23.631882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:37.789 [2024-11-20 15:37:23.631891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:37.789 [2024-11-20 15:37:23.631903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.789 [2024-11-20 15:37:23.631912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:37.789 [2024-11-20 15:37:23.631924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:37.789 [2024-11-20 15:37:23.631933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.789 [2024-11-20 15:37:23.631944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:37.789 [2024-11-20 15:37:23.631954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:37.789 [2024-11-20 15:37:23.631965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.789 [2024-11-20 15:37:23.631975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:37.789 [2024-11-20 15:37:23.631989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:37.789 [2024-11-20 15:37:23.631998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:25:37.789 [2024-11-20 15:37:23.632010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:37.789 [2024-11-20 15:37:23.632032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:37.789 [2024-11-20 15:37:23.632044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:37.789 [2024-11-20 15:37:23.632053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:37.789 [2024-11-20 15:37:23.632065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:37.789 [2024-11-20 15:37:23.632074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.789 [2024-11-20 15:37:23.632086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:37.789 [2024-11-20 15:37:23.632095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:37.789 [2024-11-20 15:37:23.632109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.789 [2024-11-20 15:37:23.632118] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:37.790 [2024-11-20 15:37:23.632133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:37.790 [2024-11-20 15:37:23.632144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.790 [2024-11-20 15:37:23.632158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.790 [2024-11-20 15:37:23.632168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:37.790 [2024-11-20 15:37:23.632183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:37.790 [2024-11-20 15:37:23.632192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:37.790 [2024-11-20 15:37:23.632205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:37.790 [2024-11-20 15:37:23.632214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:37.790 [2024-11-20 15:37:23.632226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:37.790 [2024-11-20 15:37:23.632240] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:37.790 [2024-11-20 15:37:23.632256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.790 [2024-11-20 15:37:23.632268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:37.790 [2024-11-20 15:37:23.632282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:37.790 [2024-11-20 15:37:23.632292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:37.790 [2024-11-20 15:37:23.632305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:37.790 [2024-11-20 15:37:23.632316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:37.790 [2024-11-20 15:37:23.632329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:37.790 [2024-11-20 
15:37:23.632340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:37.790 [2024-11-20 15:37:23.632353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:37.790 [2024-11-20 15:37:23.632363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:37.790 [2024-11-20 15:37:23.632378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:37.790 [2024-11-20 15:37:23.632389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:37.790 [2024-11-20 15:37:23.632403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:37.790 [2024-11-20 15:37:23.632414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:37.790 [2024-11-20 15:37:23.632427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:37.790 [2024-11-20 15:37:23.632437] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:37.790 [2024-11-20 15:37:23.632451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.790 [2024-11-20 15:37:23.632465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:37.790 [2024-11-20 15:37:23.632477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:37.790 [2024-11-20 15:37:23.632488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:37.790 [2024-11-20 15:37:23.632501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:37.790 [2024-11-20 15:37:23.632512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.790 [2024-11-20 15:37:23.632526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:37.790 [2024-11-20 15:37:23.632536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:25:37.790 [2024-11-20 15:37:23.632548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.790 [2024-11-20 15:37:23.632619] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
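The layout dump above enumerates every FTL metadata region (superblock, L2P, band/trim/P2L metadata and their mirrors) with its offset and size on the NV cache and base devices, and is printed once 'FTL startup' reaches the layout-upgrade step. A minimal sketch of the create call that kicks off this startup sequence, assuming the usual bdev_ftl_create flags; the bdev names are copied from the bdev_get_bdevs output further down in this log:

# Hedged sketch, not the literal command from this run. The flag spellings
# (-b name, -d base bdev, -c NV cache bdev) are assumptions; check
# `rpc.py bdev_ftl_create -h` against the SPDK version in use.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create \
    -b ftl0 \
    -d 67c5ea6d-cb1e-4820-94ee-c6d990af2a84 \
    -c nvc0n1p0

On success the RPC returns the name/uuid JSON pair that appears after the startup trace below.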
00:25:37.790 [2024-11-20 15:37:23.632642] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:41.980 [2024-11-20 15:37:27.188040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.980 [2024-11-20 15:37:27.188106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:41.980 [2024-11-20 15:37:27.188123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3555.396 ms 00:25:41.980 [2024-11-20 15:37:27.188138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.980 [2024-11-20 15:37:27.228766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.980 [2024-11-20 15:37:27.228821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:41.980 [2024-11-20 15:37:27.228839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.260 ms 00:25:41.980 [2024-11-20 15:37:27.228855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.980 [2024-11-20 15:37:27.229028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.980 [2024-11-20 15:37:27.229051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:41.980 [2024-11-20 15:37:27.229063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:41.980 [2024-11-20 15:37:27.229082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.980 [2024-11-20 15:37:27.291244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.980 [2024-11-20 15:37:27.291498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:41.980 [2024-11-20 15:37:27.291526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.107 ms 00:25:41.980 [2024-11-20 15:37:27.291542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.980 [2024-11-20 15:37:27.291609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.980 [2024-11-20 15:37:27.291624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:41.980 [2024-11-20 15:37:27.291635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:41.980 [2024-11-20 15:37:27.291656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.980 [2024-11-20 15:37:27.292169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.980 [2024-11-20 15:37:27.292191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:41.980 [2024-11-20 15:37:27.292202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:25:41.980 [2024-11-20 15:37:27.292218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.292344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.292360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:41.981 [2024-11-20 15:37:27.292372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:25:41.981 [2024-11-20 15:37:27.292387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.314030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.314228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:41.981 [2024-11-20 
15:37:27.314251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.614 ms 00:25:41.981 [2024-11-20 15:37:27.314265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.327454] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:41.981 [2024-11-20 15:37:27.344183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.344259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:41.981 [2024-11-20 15:37:27.344278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.773 ms 00:25:41.981 [2024-11-20 15:37:27.344291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.421201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.421263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:41.981 [2024-11-20 15:37:27.421288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.849 ms 00:25:41.981 [2024-11-20 15:37:27.421299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.421511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.421526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:41.981 [2024-11-20 15:37:27.421543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:25:41.981 [2024-11-20 15:37:27.421553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.458591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.458633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:41.981 [2024-11-20 15:37:27.458649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.928 ms 00:25:41.981 [2024-11-20 15:37:27.458660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.495012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.495049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:41.981 [2024-11-20 15:37:27.495067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.313 ms 00:25:41.981 [2024-11-20 15:37:27.495077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.495841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.495868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:41.981 [2024-11-20 15:37:27.495884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:25:41.981 [2024-11-20 15:37:27.495895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.611073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.611305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:41.981 [2024-11-20 15:37:27.611340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.091 ms 00:25:41.981 [2024-11-20 15:37:27.611355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 
15:37:27.650757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.650816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:41.981 [2024-11-20 15:37:27.650851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.292 ms 00:25:41.981 [2024-11-20 15:37:27.650862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.687879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.687920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:41.981 [2024-11-20 15:37:27.687938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.963 ms 00:25:41.981 [2024-11-20 15:37:27.687948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.725312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.725351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:41.981 [2024-11-20 15:37:27.725369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.308 ms 00:25:41.981 [2024-11-20 15:37:27.725379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.725439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.725451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:41.981 [2024-11-20 15:37:27.725471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:41.981 [2024-11-20 15:37:27.725481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.725658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.981 [2024-11-20 15:37:27.725676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:41.981 [2024-11-20 15:37:27.725689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:41.981 [2024-11-20 15:37:27.725700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.981 [2024-11-20 15:37:27.726978] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4111.966 ms, result 0 00:25:41.981 { 00:25:41.981 "name": "ftl0", 00:25:41.981 "uuid": "fe655d9a-57af-4a8b-b2f8-27f6526e63f7" 00:25:41.981 } 00:25:41.981 15:37:27 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:25:41.981 15:37:27 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:41.981 15:37:27 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:41.981 15:37:27 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:25:41.981 15:37:27 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:41.981 15:37:27 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:41.981 15:37:27 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:42.240 15:37:27 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:42.240 [ 00:25:42.240 { 00:25:42.240 "name": "ftl0", 00:25:42.240 "aliases": [ 00:25:42.240 "fe655d9a-57af-4a8b-b2f8-27f6526e63f7" 00:25:42.240 ], 00:25:42.240 "product_name": "FTL 
disk", 00:25:42.240 "block_size": 4096, 00:25:42.240 "num_blocks": 20971520, 00:25:42.240 "uuid": "fe655d9a-57af-4a8b-b2f8-27f6526e63f7", 00:25:42.240 "assigned_rate_limits": { 00:25:42.240 "rw_ios_per_sec": 0, 00:25:42.240 "rw_mbytes_per_sec": 0, 00:25:42.240 "r_mbytes_per_sec": 0, 00:25:42.240 "w_mbytes_per_sec": 0 00:25:42.240 }, 00:25:42.240 "claimed": false, 00:25:42.240 "zoned": false, 00:25:42.240 "supported_io_types": { 00:25:42.240 "read": true, 00:25:42.240 "write": true, 00:25:42.240 "unmap": true, 00:25:42.240 "flush": true, 00:25:42.240 "reset": false, 00:25:42.240 "nvme_admin": false, 00:25:42.240 "nvme_io": false, 00:25:42.240 "nvme_io_md": false, 00:25:42.240 "write_zeroes": true, 00:25:42.240 "zcopy": false, 00:25:42.240 "get_zone_info": false, 00:25:42.240 "zone_management": false, 00:25:42.240 "zone_append": false, 00:25:42.240 "compare": false, 00:25:42.240 "compare_and_write": false, 00:25:42.240 "abort": false, 00:25:42.240 "seek_hole": false, 00:25:42.240 "seek_data": false, 00:25:42.240 "copy": false, 00:25:42.240 "nvme_iov_md": false 00:25:42.240 }, 00:25:42.240 "driver_specific": { 00:25:42.240 "ftl": { 00:25:42.240 "base_bdev": "67c5ea6d-cb1e-4820-94ee-c6d990af2a84", 00:25:42.240 "cache": "nvc0n1p0" 00:25:42.240 } 00:25:42.240 } 00:25:42.240 } 00:25:42.240 ] 00:25:42.240 15:37:28 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:25:42.240 15:37:28 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:25:42.240 15:37:28 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:42.809 15:37:28 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:25:42.809 15:37:28 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:42.809 [2024-11-20 15:37:28.719688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.809 [2024-11-20 15:37:28.719760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:42.809 [2024-11-20 15:37:28.719792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:42.809 [2024-11-20 15:37:28.719805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.809 [2024-11-20 15:37:28.719848] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:42.809 [2024-11-20 15:37:28.724228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.809 [2024-11-20 15:37:28.724395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:42.809 [2024-11-20 15:37:28.724427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.350 ms 00:25:42.809 [2024-11-20 15:37:28.724442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.809 [2024-11-20 15:37:28.724943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.809 [2024-11-20 15:37:28.724967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:42.809 [2024-11-20 15:37:28.724982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:25:42.809 [2024-11-20 15:37:28.724993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.809 [2024-11-20 15:37:28.727592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.809 [2024-11-20 15:37:28.727621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:42.809 
[2024-11-20 15:37:28.727636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:25:42.809 [2024-11-20 15:37:28.727647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.809 [2024-11-20 15:37:28.732747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.809 [2024-11-20 15:37:28.732782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:42.809 [2024-11-20 15:37:28.732797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.064 ms 00:25:42.809 [2024-11-20 15:37:28.732806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.070 [2024-11-20 15:37:28.770236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.070 [2024-11-20 15:37:28.770393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:43.070 [2024-11-20 15:37:28.770422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.328 ms 00:25:43.070 [2024-11-20 15:37:28.770433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.070 [2024-11-20 15:37:28.792866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.070 [2024-11-20 15:37:28.793042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:43.070 [2024-11-20 15:37:28.793073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.361 ms 00:25:43.070 [2024-11-20 15:37:28.793084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.070 [2024-11-20 15:37:28.793301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.070 [2024-11-20 15:37:28.793319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:43.070 [2024-11-20 15:37:28.793333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:25:43.070 [2024-11-20 15:37:28.793343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.070 [2024-11-20 15:37:28.830803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.070 [2024-11-20 15:37:28.830842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:43.070 [2024-11-20 15:37:28.830867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.422 ms 00:25:43.070 [2024-11-20 15:37:28.830878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.070 [2024-11-20 15:37:28.867374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.070 [2024-11-20 15:37:28.867412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:43.070 [2024-11-20 15:37:28.867429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.439 ms 00:25:43.070 [2024-11-20 15:37:28.867438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.070 [2024-11-20 15:37:28.904314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.070 [2024-11-20 15:37:28.904352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:43.070 [2024-11-20 15:37:28.904368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.819 ms 00:25:43.070 [2024-11-20 15:37:28.904378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.070 [2024-11-20 15:37:28.940820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.070 [2024-11-20 15:37:28.940855] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:43.070 [2024-11-20 15:37:28.940871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.308 ms 00:25:43.070 [2024-11-20 15:37:28.940881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.070 [2024-11-20 15:37:28.940928] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:43.070 [2024-11-20 15:37:28.940944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.940959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.940970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.940983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.940993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 
[2024-11-20 15:37:28.941207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:43.070 [2024-11-20 15:37:28.941451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:25:43.071 [2024-11-20 15:37:28.941511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.941987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:43.071 [2024-11-20 15:37:28.942233] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:43.071 [2024-11-20 15:37:28.942245] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fe655d9a-57af-4a8b-b2f8-27f6526e63f7 00:25:43.071 [2024-11-20 15:37:28.942256] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:43.071 [2024-11-20 15:37:28.942271] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:43.071 [2024-11-20 15:37:28.942282] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:43.071 [2024-11-20 15:37:28.942298] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:43.071 [2024-11-20 15:37:28.942307] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:43.071 [2024-11-20 15:37:28.942320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:43.071 [2024-11-20 15:37:28.942330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:43.071 [2024-11-20 15:37:28.942341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:43.071 [2024-11-20 15:37:28.942350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:43.071 [2024-11-20 15:37:28.942362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.071 [2024-11-20 15:37:28.942372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:43.071 [2024-11-20 15:37:28.942387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms 00:25:43.071 [2024-11-20 15:37:28.942397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.071 [2024-11-20 15:37:28.962830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.072 [2024-11-20 15:37:28.962867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:43.072 [2024-11-20 15:37:28.962883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.366 ms 00:25:43.072 [2024-11-20 15:37:28.962893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.072 [2024-11-20 15:37:28.963460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.072 [2024-11-20 15:37:28.963474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:43.072 [2024-11-20 15:37:28.963487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:25:43.072 [2024-11-20 15:37:28.963497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.036330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.036371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.332 [2024-11-20 15:37:29.036388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 15:37:29.036399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
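The rollback steps above and below are the tail of 'FTL shutdown', driven by the bdev_ftl_unload RPC issued at ftl/fio.sh@73 earlier in this log: metadata is persisted, the clean state is set, bands and statistics are dumped, and each startup step is unwound in reverse order. A sketch of the same unload with a follow-up existence check (the check is an assumption, not part of fio.sh):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Persists L2P, band, and trim metadata, then tears the instance down;
# mirrors the `bdev_ftl_unload -b ftl0` call shown earlier in this log.
$rpc bdev_ftl_unload -b ftl0
# Assumed verification step: the lookup fails once ftl0 is gone.
$rpc bdev_get_bdevs -b ftl0 || echo 'ftl0 unloaded'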
00:25:43.332 [2024-11-20 15:37:29.036473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.036484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.332 [2024-11-20 15:37:29.036497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 15:37:29.036507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.036656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.036674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.332 [2024-11-20 15:37:29.036687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 15:37:29.036697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.036736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.036747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.332 [2024-11-20 15:37:29.036760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 15:37:29.036770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.172196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.172253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.332 [2024-11-20 15:37:29.172287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 15:37:29.172298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.274466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.274747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.332 [2024-11-20 15:37:29.274777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 15:37:29.274800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.274925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.274938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.332 [2024-11-20 15:37:29.274956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 15:37:29.274966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.275049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.275061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.332 [2024-11-20 15:37:29.275074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 15:37:29.275084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.275226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.275241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.332 [2024-11-20 15:37:29.275254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 
15:37:29.275267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.275325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.275337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:43.332 [2024-11-20 15:37:29.275351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 15:37:29.275361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.275412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.275423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.332 [2024-11-20 15:37:29.275435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 15:37:29.275445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.275507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.332 [2024-11-20 15:37:29.275519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.332 [2024-11-20 15:37:29.275532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.332 [2024-11-20 15:37:29.275542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.332 [2024-11-20 15:37:29.275732] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 556.009 ms, result 0 00:25:43.332 true 00:25:43.591 15:37:29 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77123 00:25:43.591 15:37:29 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77123 ']' 00:25:43.591 15:37:29 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77123 00:25:43.591 15:37:29 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:25:43.591 15:37:29 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.591 15:37:29 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77123 00:25:43.591 killing process with pid 77123 00:25:43.591 15:37:29 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.591 15:37:29 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.591 15:37:29 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77123' 00:25:43.591 15:37:29 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77123 00:25:43.591 15:37:29 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77123 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:48.865 15:37:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:48.865 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:25:48.865 fio-3.35 00:25:48.865 Starting 1 thread 00:25:54.133 00:25:54.133 test: (groupid=0, jobs=1): err= 0: pid=77346: Wed Nov 20 15:37:39 2024 00:25:54.133 read: IOPS=973, BW=64.7MiB/s (67.8MB/s)(255MiB/3936msec) 00:25:54.133 slat (nsec): min=4413, max=37304, avg=6769.46, stdev=2696.11 00:25:54.133 clat (usec): min=308, max=830, avg=455.52, stdev=53.36 00:25:54.133 lat (usec): min=313, max=846, avg=462.29, stdev=53.91 00:25:54.133 clat percentiles (usec): 00:25:54.133 | 1.00th=[ 343], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 412], 00:25:54.133 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 453], 60.00th=[ 461], 00:25:54.133 | 70.00th=[ 469], 80.00th=[ 486], 90.00th=[ 523], 95.00th=[ 545], 00:25:54.133 | 99.00th=[ 611], 99.50th=[ 685], 99.90th=[ 791], 99.95th=[ 807], 00:25:54.133 | 99.99th=[ 832] 00:25:54.133 write: IOPS=980, BW=65.1MiB/s (68.3MB/s)(256MiB/3932msec); 0 zone resets 00:25:54.133 slat (usec): min=16, max=158, avg=21.22, stdev= 5.24 00:25:54.133 clat (usec): min=350, max=3684, avg=528.08, stdev=87.57 00:25:54.133 lat (usec): min=372, max=3744, avg=549.29, stdev=88.38 00:25:54.133 clat percentiles (usec): 00:25:54.133 | 1.00th=[ 400], 5.00th=[ 433], 10.00th=[ 461], 20.00th=[ 474], 00:25:54.133 | 30.00th=[ 490], 40.00th=[ 510], 50.00th=[ 529], 60.00th=[ 537], 00:25:54.133 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 586], 95.00th=[ 635], 00:25:54.133 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 930], 99.95th=[ 1631], 00:25:54.133 | 99.99th=[ 3687] 00:25:54.133 bw ( KiB/s): min=62696, max=69088, per=99.35%, avg=66251.43, stdev=2484.78, samples=7 00:25:54.133 iops : min= 922, max= 1016, avg=974.29, stdev=36.54, samples=7 00:25:54.133 lat (usec) : 500=60.10%, 750=38.87%, 1000=0.99% 00:25:54.133 lat (msec) : 
2=0.03%, 4=0.01% 00:25:54.133 cpu : usr=98.91%, sys=0.18%, ctx=35, majf=0, minf=1169 00:25:54.133 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.133 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.133 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:54.133 00:25:54.133 Run status group 0 (all jobs): 00:25:54.133 READ: bw=64.7MiB/s (67.8MB/s), 64.7MiB/s-64.7MiB/s (67.8MB/s-67.8MB/s), io=255MiB (267MB), run=3936-3936msec 00:25:54.133 WRITE: bw=65.1MiB/s (68.3MB/s), 65.1MiB/s-65.1MiB/s (68.3MB/s-68.3MB/s), io=256MiB (269MB), run=3932-3932msec 00:25:56.041 ----------------------------------------------------- 00:25:56.041 Suppressions used: 00:25:56.041 count bytes template 00:25:56.041 1 5 /usr/src/fio/parse.c 00:25:56.041 1 8 libtcmalloc_minimal.so 00:25:56.041 1 904 libcrypto.so 00:25:56.041 ----------------------------------------------------- 00:25:56.041 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:56.041 15:37:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:56.300 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:25:56.300 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:25:56.300 fio-3.35 00:25:56.300 Starting 2 threads 00:26:28.380 00:26:28.380 first_half: (groupid=0, jobs=1): err= 0: pid=77450: Wed Nov 20 15:38:10 2024 00:26:28.380 read: IOPS=2448, BW=9793KiB/s (10.0MB/s)(255MiB/26648msec) 00:26:28.380 slat (nsec): min=3678, max=55250, avg=6597.01, stdev=2278.86 00:26:28.380 clat (usec): min=679, max=313569, avg=39252.83, stdev=21552.26 00:26:28.380 lat (usec): min=697, max=313581, avg=39259.43, stdev=21552.56 00:26:28.380 clat percentiles (msec): 00:26:28.380 | 1.00th=[ 9], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:26:28.380 | 30.00th=[ 35], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:26:28.380 | 70.00th=[ 37], 80.00th=[ 39], 90.00th=[ 44], 95.00th=[ 46], 00:26:28.380 | 99.00th=[ 165], 99.50th=[ 186], 99.90th=[ 259], 99.95th=[ 284], 00:26:28.380 | 99.99th=[ 305] 00:26:28.380 write: IOPS=2881, BW=11.3MiB/s (11.8MB/s)(256MiB/22744msec); 0 zone resets 00:26:28.380 slat (usec): min=4, max=597, avg= 8.94, stdev= 5.75 00:26:28.380 clat (usec): min=462, max=86954, avg=12874.01, stdev=21026.18 00:26:28.380 lat (usec): min=475, max=86964, avg=12882.95, stdev=21026.31 00:26:28.380 clat percentiles (usec): 00:26:28.380 | 1.00th=[ 881], 5.00th=[ 1139], 10.00th=[ 1319], 20.00th=[ 1663], 00:26:28.380 | 30.00th=[ 3130], 40.00th=[ 4752], 50.00th=[ 6063], 60.00th=[ 7177], 00:26:28.380 | 70.00th=[ 8291], 80.00th=[12780], 90.00th=[36439], 95.00th=[76022], 00:26:28.380 | 99.00th=[82314], 99.50th=[83362], 99.90th=[85459], 99.95th=[85459], 00:26:28.380 | 99.99th=[86508] 00:26:28.380 bw ( KiB/s): min= 376, max=38584, per=81.23%, avg=18724.57, stdev=9928.62, samples=28 00:26:28.380 iops : min= 94, max= 9646, avg=4681.14, stdev=2482.16, samples=28 00:26:28.380 lat (usec) : 500=0.01%, 750=0.08%, 1000=1.08% 00:26:28.380 lat (msec) : 2=11.43%, 4=5.27%, 10=21.35%, 20=7.11%, 50=46.91% 00:26:28.380 lat (msec) : 100=5.47%, 250=1.23%, 500=0.06% 00:26:28.380 cpu : usr=99.13%, sys=0.24%, ctx=64, majf=0, minf=5589 00:26:28.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:28.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.380 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:28.380 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:28.380 second_half: (groupid=0, jobs=1): err= 0: pid=77451: Wed Nov 20 15:38:10 2024 00:26:28.380 read: IOPS=2458, BW=9834KiB/s (10.1MB/s)(255MiB/26517msec) 00:26:28.380 slat (nsec): min=3649, max=35618, avg=6672.62, stdev=2184.48 00:26:28.380 clat (usec): min=791, max=319127, avg=39744.69, stdev=19507.01 00:26:28.380 lat (usec): min=800, max=319139, avg=39751.36, stdev=19507.24 00:26:28.380 clat percentiles (msec): 00:26:28.380 | 1.00th=[ 7], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 35], 00:26:28.380 | 30.00th=[ 35], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:26:28.380 | 70.00th=[ 37], 80.00th=[ 39], 90.00th=[ 45], 95.00th=[ 51], 00:26:28.380 | 
99.00th=[ 155], 99.50th=[ 174], 99.90th=[ 192], 99.95th=[ 215], 00:26:28.380 | 99.99th=[ 313] 00:26:28.380 write: IOPS=3169, BW=12.4MiB/s (13.0MB/s)(256MiB/20677msec); 0 zone resets 00:26:28.380 slat (usec): min=4, max=457, avg= 8.82, stdev= 4.77 00:26:28.380 clat (usec): min=464, max=88073, avg=12216.32, stdev=20756.22 00:26:28.380 lat (usec): min=477, max=88083, avg=12225.13, stdev=20756.30 00:26:28.380 clat percentiles (usec): 00:26:28.380 | 1.00th=[ 922], 5.00th=[ 1156], 10.00th=[ 1319], 20.00th=[ 1614], 00:26:28.380 | 30.00th=[ 2573], 40.00th=[ 4424], 50.00th=[ 5866], 60.00th=[ 6980], 00:26:28.380 | 70.00th=[ 8029], 80.00th=[12256], 90.00th=[16450], 95.00th=[76022], 00:26:28.380 | 99.00th=[82314], 99.50th=[84411], 99.90th=[85459], 99.95th=[86508], 00:26:28.380 | 99.99th=[87557] 00:26:28.380 bw ( KiB/s): min= 1000, max=40328, per=94.78%, avg=21848.12, stdev=11880.98, samples=24 00:26:28.380 iops : min= 250, max=10082, avg=5462.00, stdev=2970.21, samples=24 00:26:28.380 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.85% 00:26:28.380 lat (msec) : 2=13.00%, 4=5.26%, 10=19.24%, 20=8.18%, 50=46.44% 00:26:28.380 lat (msec) : 100=5.63%, 250=1.34%, 500=0.01% 00:26:28.380 cpu : usr=99.11%, sys=0.23%, ctx=40, majf=0, minf=5538 00:26:28.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:28.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.380 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:28.380 issued rwts: total=65191,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:28.380 00:26:28.380 Run status group 0 (all jobs): 00:26:28.380 READ: bw=19.1MiB/s (20.0MB/s), 9793KiB/s-9834KiB/s (10.0MB/s-10.1MB/s), io=510MiB (534MB), run=26517-26648msec 00:26:28.380 WRITE: bw=22.5MiB/s (23.6MB/s), 11.3MiB/s-12.4MiB/s (11.8MB/s-13.0MB/s), io=512MiB (537MB), run=20677-22744msec 00:26:28.380 ----------------------------------------------------- 00:26:28.380 Suppressions used: 00:26:28.380 count bytes template 00:26:28.380 2 10 /usr/src/fio/parse.c 00:26:28.380 2 192 /usr/src/fio/iolog.c 00:26:28.380 1 8 libtcmalloc_minimal.so 00:26:28.380 1 904 libcrypto.so 00:26:28.380 ----------------------------------------------------- 00:26:28.380 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:28.380 15:38:12 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:28.380 15:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:28.381 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:28.381 fio-3.35 00:26:28.381 Starting 1 thread 00:26:43.275 00:26:43.275 test: (groupid=0, jobs=1): err= 0: pid=77791: Wed Nov 20 15:38:27 2024 00:26:43.275 read: IOPS=7275, BW=28.4MiB/s (29.8MB/s)(255MiB/8962msec) 00:26:43.275 slat (nsec): min=3511, max=34606, avg=5885.89, stdev=2061.19 00:26:43.275 clat (usec): min=678, max=34850, avg=17583.46, stdev=1265.59 00:26:43.275 lat (usec): min=683, max=34856, avg=17589.34, stdev=1265.61 00:26:43.275 clat percentiles (usec): 00:26:43.275 | 1.00th=[16319], 5.00th=[16581], 10.00th=[16712], 20.00th=[16909], 00:26:43.275 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17433], 60.00th=[17433], 00:26:43.275 | 70.00th=[17695], 80.00th=[17695], 90.00th=[18482], 95.00th=[20317], 00:26:43.275 | 99.00th=[21365], 99.50th=[24249], 99.90th=[27657], 99.95th=[30802], 00:26:43.275 | 99.99th=[34341] 00:26:43.275 write: IOPS=13.6k, BW=53.3MiB/s (55.9MB/s)(256MiB/4804msec); 0 zone resets 00:26:43.275 slat (usec): min=4, max=642, avg= 8.35, stdev= 6.23 00:26:43.275 clat (usec): min=585, max=52849, avg=9332.99, stdev=11320.62 00:26:43.275 lat (usec): min=592, max=52856, avg=9341.35, stdev=11320.62 00:26:43.275 clat percentiles (usec): 00:26:43.275 | 1.00th=[ 824], 5.00th=[ 988], 10.00th=[ 1074], 20.00th=[ 1237], 00:26:43.275 | 30.00th=[ 1434], 40.00th=[ 1778], 50.00th=[ 6587], 60.00th=[ 7439], 00:26:43.275 | 70.00th=[ 8455], 80.00th=[10028], 90.00th=[33817], 95.00th=[35390], 00:26:43.275 | 99.00th=[36963], 99.50th=[38011], 99.90th=[41157], 99.95th=[42730], 00:26:43.275 | 99.99th=[51119] 00:26:43.275 bw ( KiB/s): min=28432, max=70560, per=96.08%, avg=52428.80, stdev=11219.41, samples=10 00:26:43.275 iops : min= 7108, max=17640, avg=13107.20, stdev=2804.85, samples=10 00:26:43.275 lat (usec) : 750=0.22%, 1000=2.67% 00:26:43.275 lat (msec) : 2=17.69%, 4=0.53%, 10=18.77%, 20=49.06%, 50=11.07% 00:26:43.275 lat (msec) : 100=0.01% 00:26:43.275 cpu : usr=98.70%, sys=0.45%, ctx=17, majf=0, minf=5565 
00:26:43.275 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:26:43.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:43.275 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:43.275 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:43.275 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:43.275
00:26:43.275 Run status group 0 (all jobs):
00:26:43.275 READ: bw=28.4MiB/s (29.8MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=255MiB (267MB), run=8962-8962msec
00:26:43.275 WRITE: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=256MiB (268MB), run=4804-4804msec
00:26:44.211 -----------------------------------------------------
00:26:44.211 Suppressions used:
00:26:44.211 count bytes template
00:26:44.211 1 5 /usr/src/fio/parse.c
00:26:44.211 2 192 /usr/src/fio/iolog.c
00:26:44.211 1 8 libtcmalloc_minimal.so
00:26:44.211 1 904 libcrypto.so
00:26:44.211 -----------------------------------------------------
00:26:44.211
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files
00:26:44.211 Remove shared memory files
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57920 /dev/shm/spdk_tgt_trace.pid76020
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f
00:26:44.211 ************************************
00:26:44.211 END TEST ftl_fio_basic
00:26:44.211 ************************************
00:26:44.211
00:26:44.211 real 1m10.993s
00:26:44.211 user 2m35.541s
00:26:44.211 sys 0m4.043s
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:44.211 15:38:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:26:44.211 15:38:30 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:26:44.211 15:38:30 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:26:44.211 15:38:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:44.211 15:38:30 ftl -- common/autotest_common.sh@10 -- # set +x
00:26:44.211 ************************************
00:26:44.211 START TEST ftl_bdevperf
00:26:44.211 ************************************
00:26:44.211 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:26:44.211 * Looking for test storage...
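Note: the ftl_fio_basic suite that finishes above drives fio through SPDK's external bdev engine, which is why each run is preceded by the ldd/grep sequence that resolves libasan and then LD_PRELOADs it together with build/fio/spdk_bdev. The log never prints the contents of randw-verify-j2.fio, so the job file below is only a plausible reconstruction from the fio banner (ioengine=spdk_bdev, bs=4096, iodepth=128, jobs first_half and second_half against the same ftl0 bdev); the spdk_json_conf path is guessed from the ftl.json that fio.sh@84 cleans up afterwards.

# Hypothetical reconstruction of the job file; the option names are real
# fio/SPDK-plugin knobs, but every value here is an assumption, not the
# repo's actual test/ftl/config/fio/randw-verify-j2.fio.
cat > /tmp/randw-verify-j2.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
thread=1
direct=1
rw=randwrite
bs=4096
iodepth=128
verify=crc32c

[first_half]
filename=ftl0
offset=0%
size=50%

[second_half]
filename=ftl0
offset=50%
size=50%
EOF
LD_PRELOAD="/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
  /usr/src/fio/fio /tmp/randw-verify-j2.fio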
00:26:44.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:44.211 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:44.211 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:44.211 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.470 --rc genhtml_branch_coverage=1 00:26:44.470 --rc genhtml_function_coverage=1 00:26:44.470 --rc genhtml_legend=1 00:26:44.470 --rc geninfo_all_blocks=1 00:26:44.470 --rc geninfo_unexecuted_blocks=1 00:26:44.470 00:26:44.470 ' 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.470 --rc genhtml_branch_coverage=1 00:26:44.470 
--rc genhtml_function_coverage=1 00:26:44.470 --rc genhtml_legend=1 00:26:44.470 --rc geninfo_all_blocks=1 00:26:44.470 --rc geninfo_unexecuted_blocks=1 00:26:44.470 00:26:44.470 ' 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.470 --rc genhtml_branch_coverage=1 00:26:44.470 --rc genhtml_function_coverage=1 00:26:44.470 --rc genhtml_legend=1 00:26:44.470 --rc geninfo_all_blocks=1 00:26:44.470 --rc geninfo_unexecuted_blocks=1 00:26:44.470 00:26:44.470 ' 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.470 --rc genhtml_branch_coverage=1 00:26:44.470 --rc genhtml_function_coverage=1 00:26:44.470 --rc genhtml_legend=1 00:26:44.470 --rc geninfo_all_blocks=1 00:26:44.470 --rc geninfo_unexecuted_blocks=1 00:26:44.470 00:26:44.470 ' 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:44.470 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78036 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78036 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78036 ']' 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.471 15:38:30 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.471 [2024-11-20 15:38:30.366791] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
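Note: bdevperf is started here with -z, so it initializes and then idles until an RPC call tells it to run; per this trace, -T ftl0 scopes the run to that bdev. The harness's waitforlisten() then blocks until the application answers on the default RPC socket. Reduced to a sketch (the poll loop below stands in for waitforlisten, and the harness's killprocess helper is simplified to plain kill):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$bdevperf" -z -T ftl0 &                  # -z: wait for an RPC trigger before running I/O
bdevperf_pid=$!
trap 'kill "$bdevperf_pid"; exit 1' SIGINT SIGTERM EXIT
# Poll the RPC socket until the application is up and answering.
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done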
00:26:44.471 [2024-11-20 15:38:30.367228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78036 ] 00:26:44.729 [2024-11-20 15:38:30.558473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.730 [2024-11-20 15:38:30.665672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.299 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:45.299 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:45.299 15:38:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:45.299 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:26:45.299 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:45.299 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:26:45.299 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:26:45.299 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:45.868 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:45.868 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:26:45.868 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:45.868 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:45.868 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:45.868 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:45.868 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:45.868 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:46.128 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:46.128 { 00:26:46.128 "name": "nvme0n1", 00:26:46.128 "aliases": [ 00:26:46.128 "109997f8-5f7a-4e9b-bf00-118b0872e3c6" 00:26:46.128 ], 00:26:46.128 "product_name": "NVMe disk", 00:26:46.128 "block_size": 4096, 00:26:46.128 "num_blocks": 1310720, 00:26:46.128 "uuid": "109997f8-5f7a-4e9b-bf00-118b0872e3c6", 00:26:46.128 "numa_id": -1, 00:26:46.128 "assigned_rate_limits": { 00:26:46.129 "rw_ios_per_sec": 0, 00:26:46.129 "rw_mbytes_per_sec": 0, 00:26:46.129 "r_mbytes_per_sec": 0, 00:26:46.129 "w_mbytes_per_sec": 0 00:26:46.129 }, 00:26:46.129 "claimed": true, 00:26:46.129 "claim_type": "read_many_write_one", 00:26:46.129 "zoned": false, 00:26:46.129 "supported_io_types": { 00:26:46.129 "read": true, 00:26:46.129 "write": true, 00:26:46.129 "unmap": true, 00:26:46.129 "flush": true, 00:26:46.129 "reset": true, 00:26:46.129 "nvme_admin": true, 00:26:46.129 "nvme_io": true, 00:26:46.129 "nvme_io_md": false, 00:26:46.129 "write_zeroes": true, 00:26:46.129 "zcopy": false, 00:26:46.129 "get_zone_info": false, 00:26:46.129 "zone_management": false, 00:26:46.129 "zone_append": false, 00:26:46.129 "compare": true, 00:26:46.129 "compare_and_write": false, 00:26:46.129 "abort": true, 00:26:46.129 "seek_hole": false, 00:26:46.129 "seek_data": false, 00:26:46.129 "copy": true, 00:26:46.129 "nvme_iov_md": false 00:26:46.129 }, 00:26:46.129 "driver_specific": { 00:26:46.129 
"nvme": [ 00:26:46.129 { 00:26:46.129 "pci_address": "0000:00:11.0", 00:26:46.129 "trid": { 00:26:46.129 "trtype": "PCIe", 00:26:46.129 "traddr": "0000:00:11.0" 00:26:46.129 }, 00:26:46.129 "ctrlr_data": { 00:26:46.129 "cntlid": 0, 00:26:46.129 "vendor_id": "0x1b36", 00:26:46.129 "model_number": "QEMU NVMe Ctrl", 00:26:46.129 "serial_number": "12341", 00:26:46.129 "firmware_revision": "8.0.0", 00:26:46.129 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:46.129 "oacs": { 00:26:46.129 "security": 0, 00:26:46.129 "format": 1, 00:26:46.129 "firmware": 0, 00:26:46.129 "ns_manage": 1 00:26:46.129 }, 00:26:46.129 "multi_ctrlr": false, 00:26:46.129 "ana_reporting": false 00:26:46.129 }, 00:26:46.129 "vs": { 00:26:46.129 "nvme_version": "1.4" 00:26:46.129 }, 00:26:46.129 "ns_data": { 00:26:46.129 "id": 1, 00:26:46.129 "can_share": false 00:26:46.129 } 00:26:46.129 } 00:26:46.129 ], 00:26:46.129 "mp_policy": "active_passive" 00:26:46.129 } 00:26:46.129 } 00:26:46.129 ]' 00:26:46.129 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:46.129 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:46.129 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:46.129 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:46.129 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:46.129 15:38:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:26:46.129 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:26:46.129 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:46.129 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:26:46.129 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:46.129 15:38:31 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:46.388 15:38:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=0645a590-db81-4cde-9f3c-a1bd15ad97d4 00:26:46.388 15:38:32 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:26:46.388 15:38:32 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0645a590-db81-4cde-9f3c-a1bd15ad97d4 00:26:46.647 15:38:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:46.906 15:38:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=ec5260eb-14d4-4539-b7c7-8ccadf72e4d3 00:26:46.906 15:38:32 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ec5260eb-14d4-4539-b7c7-8ccadf72e4d3 00:26:47.169 15:38:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=15e27d66-b654-4642-bbf6-89230583db3c 00:26:47.169 15:38:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 15e27d66-b654-4642-bbf6-89230583db3c 00:26:47.169 15:38:32 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:26:47.169 15:38:32 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:47.169 15:38:32 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=15e27d66-b654-4642-bbf6-89230583db3c 00:26:47.169 15:38:32 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:26:47.169 15:38:32 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 15e27d66-b654-4642-bbf6-89230583db3c 00:26:47.169 15:38:32 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=15e27d66-b654-4642-bbf6-89230583db3c 00:26:47.169 15:38:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:47.169 15:38:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:47.169 15:38:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:47.169 15:38:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 15e27d66-b654-4642-bbf6-89230583db3c 00:26:47.428 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:47.428 { 00:26:47.428 "name": "15e27d66-b654-4642-bbf6-89230583db3c", 00:26:47.428 "aliases": [ 00:26:47.428 "lvs/nvme0n1p0" 00:26:47.428 ], 00:26:47.428 "product_name": "Logical Volume", 00:26:47.428 "block_size": 4096, 00:26:47.428 "num_blocks": 26476544, 00:26:47.428 "uuid": "15e27d66-b654-4642-bbf6-89230583db3c", 00:26:47.428 "assigned_rate_limits": { 00:26:47.428 "rw_ios_per_sec": 0, 00:26:47.428 "rw_mbytes_per_sec": 0, 00:26:47.428 "r_mbytes_per_sec": 0, 00:26:47.428 "w_mbytes_per_sec": 0 00:26:47.428 }, 00:26:47.428 "claimed": false, 00:26:47.428 "zoned": false, 00:26:47.428 "supported_io_types": { 00:26:47.428 "read": true, 00:26:47.428 "write": true, 00:26:47.428 "unmap": true, 00:26:47.428 "flush": false, 00:26:47.428 "reset": true, 00:26:47.428 "nvme_admin": false, 00:26:47.428 "nvme_io": false, 00:26:47.428 "nvme_io_md": false, 00:26:47.428 "write_zeroes": true, 00:26:47.428 "zcopy": false, 00:26:47.428 "get_zone_info": false, 00:26:47.428 "zone_management": false, 00:26:47.428 "zone_append": false, 00:26:47.428 "compare": false, 00:26:47.428 "compare_and_write": false, 00:26:47.428 "abort": false, 00:26:47.428 "seek_hole": true, 00:26:47.428 "seek_data": true, 00:26:47.428 "copy": false, 00:26:47.428 "nvme_iov_md": false 00:26:47.428 }, 00:26:47.428 "driver_specific": { 00:26:47.428 "lvol": { 00:26:47.428 "lvol_store_uuid": "ec5260eb-14d4-4539-b7c7-8ccadf72e4d3", 00:26:47.428 "base_bdev": "nvme0n1", 00:26:47.428 "thin_provision": true, 00:26:47.428 "num_allocated_clusters": 0, 00:26:47.428 "snapshot": false, 00:26:47.428 "clone": false, 00:26:47.428 "esnap_clone": false 00:26:47.428 } 00:26:47.428 } 00:26:47.428 } 00:26:47.428 ]' 00:26:47.428 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:47.428 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:47.428 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:47.428 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:47.428 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:47.429 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:47.429 15:38:33 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:26:47.429 15:38:33 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:26:47.429 15:38:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:47.688 15:38:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:47.688 15:38:33 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:47.689 15:38:33 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 15e27d66-b654-4642-bbf6-89230583db3c 00:26:47.689 15:38:33 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=15e27d66-b654-4642-bbf6-89230583db3c 00:26:47.689 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:47.689 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:47.689 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:47.689 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 15e27d66-b654-4642-bbf6-89230583db3c 00:26:47.948 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:47.948 { 00:26:47.948 "name": "15e27d66-b654-4642-bbf6-89230583db3c", 00:26:47.948 "aliases": [ 00:26:47.948 "lvs/nvme0n1p0" 00:26:47.948 ], 00:26:47.948 "product_name": "Logical Volume", 00:26:47.948 "block_size": 4096, 00:26:47.948 "num_blocks": 26476544, 00:26:47.948 "uuid": "15e27d66-b654-4642-bbf6-89230583db3c", 00:26:47.948 "assigned_rate_limits": { 00:26:47.948 "rw_ios_per_sec": 0, 00:26:47.948 "rw_mbytes_per_sec": 0, 00:26:47.948 "r_mbytes_per_sec": 0, 00:26:47.948 "w_mbytes_per_sec": 0 00:26:47.948 }, 00:26:47.948 "claimed": false, 00:26:47.948 "zoned": false, 00:26:47.948 "supported_io_types": { 00:26:47.948 "read": true, 00:26:47.948 "write": true, 00:26:47.948 "unmap": true, 00:26:47.948 "flush": false, 00:26:47.948 "reset": true, 00:26:47.948 "nvme_admin": false, 00:26:47.948 "nvme_io": false, 00:26:47.948 "nvme_io_md": false, 00:26:47.948 "write_zeroes": true, 00:26:47.948 "zcopy": false, 00:26:47.948 "get_zone_info": false, 00:26:47.948 "zone_management": false, 00:26:47.948 "zone_append": false, 00:26:47.948 "compare": false, 00:26:47.948 "compare_and_write": false, 00:26:47.948 "abort": false, 00:26:47.948 "seek_hole": true, 00:26:47.948 "seek_data": true, 00:26:47.948 "copy": false, 00:26:47.948 "nvme_iov_md": false 00:26:47.948 }, 00:26:47.948 "driver_specific": { 00:26:47.948 "lvol": { 00:26:47.948 "lvol_store_uuid": "ec5260eb-14d4-4539-b7c7-8ccadf72e4d3", 00:26:47.948 "base_bdev": "nvme0n1", 00:26:47.948 "thin_provision": true, 00:26:47.948 "num_allocated_clusters": 0, 00:26:47.948 "snapshot": false, 00:26:47.948 "clone": false, 00:26:47.948 "esnap_clone": false 00:26:47.948 } 00:26:47.948 } 00:26:47.948 } 00:26:47.948 ]' 00:26:47.948 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:47.948 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:47.948 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:47.948 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:47.948 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:47.948 15:38:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:47.948 15:38:33 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:26:47.948 15:38:33 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:48.207 15:38:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:26:48.207 15:38:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 15e27d66-b654-4642-bbf6-89230583db3c 00:26:48.207 15:38:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=15e27d66-b654-4642-bbf6-89230583db3c 00:26:48.207 15:38:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:48.207 15:38:34 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:26:48.207 15:38:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:48.207 15:38:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 15e27d66-b654-4642-bbf6-89230583db3c 00:26:48.466 15:38:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:48.467 { 00:26:48.467 "name": "15e27d66-b654-4642-bbf6-89230583db3c", 00:26:48.467 "aliases": [ 00:26:48.467 "lvs/nvme0n1p0" 00:26:48.467 ], 00:26:48.467 "product_name": "Logical Volume", 00:26:48.467 "block_size": 4096, 00:26:48.467 "num_blocks": 26476544, 00:26:48.467 "uuid": "15e27d66-b654-4642-bbf6-89230583db3c", 00:26:48.467 "assigned_rate_limits": { 00:26:48.467 "rw_ios_per_sec": 0, 00:26:48.467 "rw_mbytes_per_sec": 0, 00:26:48.467 "r_mbytes_per_sec": 0, 00:26:48.467 "w_mbytes_per_sec": 0 00:26:48.467 }, 00:26:48.467 "claimed": false, 00:26:48.467 "zoned": false, 00:26:48.467 "supported_io_types": { 00:26:48.467 "read": true, 00:26:48.467 "write": true, 00:26:48.467 "unmap": true, 00:26:48.467 "flush": false, 00:26:48.467 "reset": true, 00:26:48.467 "nvme_admin": false, 00:26:48.467 "nvme_io": false, 00:26:48.467 "nvme_io_md": false, 00:26:48.467 "write_zeroes": true, 00:26:48.467 "zcopy": false, 00:26:48.467 "get_zone_info": false, 00:26:48.467 "zone_management": false, 00:26:48.467 "zone_append": false, 00:26:48.467 "compare": false, 00:26:48.467 "compare_and_write": false, 00:26:48.467 "abort": false, 00:26:48.467 "seek_hole": true, 00:26:48.467 "seek_data": true, 00:26:48.467 "copy": false, 00:26:48.467 "nvme_iov_md": false 00:26:48.467 }, 00:26:48.467 "driver_specific": { 00:26:48.467 "lvol": { 00:26:48.467 "lvol_store_uuid": "ec5260eb-14d4-4539-b7c7-8ccadf72e4d3", 00:26:48.467 "base_bdev": "nvme0n1", 00:26:48.467 "thin_provision": true, 00:26:48.467 "num_allocated_clusters": 0, 00:26:48.467 "snapshot": false, 00:26:48.467 "clone": false, 00:26:48.467 "esnap_clone": false 00:26:48.467 } 00:26:48.467 } 00:26:48.467 } 00:26:48.467 ]' 00:26:48.467 15:38:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:48.467 15:38:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:48.467 15:38:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:48.467 15:38:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:48.467 15:38:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:48.467 15:38:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:48.467 15:38:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:26:48.467 15:38:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 15e27d66-b654-4642-bbf6-89230583db3c -c nvc0n1p0 --l2p_dram_limit 20 00:26:48.727 [2024-11-20 15:38:34.546588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.727 [2024-11-20 15:38:34.546637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:48.727 [2024-11-20 15:38:34.546654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:48.727 [2024-11-20 15:38:34.546670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.727 [2024-11-20 15:38:34.546733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.727 [2024-11-20 15:38:34.546763] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:48.727 [2024-11-20 15:38:34.546773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:48.728 [2024-11-20 15:38:34.546786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.728 [2024-11-20 15:38:34.546804] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:48.728 [2024-11-20 15:38:34.547971] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:48.728 [2024-11-20 15:38:34.548139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.728 [2024-11-20 15:38:34.548161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:48.728 [2024-11-20 15:38:34.548174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:26:48.728 [2024-11-20 15:38:34.548187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.728 [2024-11-20 15:38:34.548330] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9fe7bfcd-a3a6-466a-a83d-ed51d1fb28e7 00:26:48.728 [2024-11-20 15:38:34.549803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.728 [2024-11-20 15:38:34.549841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:48.728 [2024-11-20 15:38:34.549858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:48.728 [2024-11-20 15:38:34.549872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.728 [2024-11-20 15:38:34.557355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.728 [2024-11-20 15:38:34.557388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:48.728 [2024-11-20 15:38:34.557403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.426 ms 00:26:48.728 [2024-11-20 15:38:34.557414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.728 [2024-11-20 15:38:34.557531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.728 [2024-11-20 15:38:34.557546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:48.728 [2024-11-20 15:38:34.557579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:26:48.728 [2024-11-20 15:38:34.557590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.728 [2024-11-20 15:38:34.557656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.728 [2024-11-20 15:38:34.557669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:48.728 [2024-11-20 15:38:34.557683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:48.728 [2024-11-20 15:38:34.557693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.728 [2024-11-20 15:38:34.557719] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:48.728 [2024-11-20 15:38:34.562447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.728 [2024-11-20 15:38:34.562484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:48.728 [2024-11-20 15:38:34.562496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.737 ms 00:26:48.728 [2024-11-20 15:38:34.562512] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.728 [2024-11-20 15:38:34.562553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.728 [2024-11-20 15:38:34.562567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:48.728 [2024-11-20 15:38:34.562587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:48.728 [2024-11-20 15:38:34.562599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.728 [2024-11-20 15:38:34.562641] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:48.728 [2024-11-20 15:38:34.562787] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:48.728 [2024-11-20 15:38:34.562805] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:48.728 [2024-11-20 15:38:34.562822] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:48.728 [2024-11-20 15:38:34.562835] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:48.728 [2024-11-20 15:38:34.562850] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:48.728 [2024-11-20 15:38:34.562862] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:48.728 [2024-11-20 15:38:34.562874] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:48.728 [2024-11-20 15:38:34.562884] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:48.728 [2024-11-20 15:38:34.562897] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:48.728 [2024-11-20 15:38:34.562908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.728 [2024-11-20 15:38:34.562923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:48.728 [2024-11-20 15:38:34.562934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:26:48.728 [2024-11-20 15:38:34.562953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.728 [2024-11-20 15:38:34.563025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.728 [2024-11-20 15:38:34.563039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:48.728 [2024-11-20 15:38:34.563050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:48.728 [2024-11-20 15:38:34.563064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.728 [2024-11-20 15:38:34.563147] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:48.728 [2024-11-20 15:38:34.563161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:48.728 [2024-11-20 15:38:34.563175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:48.728 [2024-11-20 15:38:34.563188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:48.728 [2024-11-20 15:38:34.563198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:48.728 [2024-11-20 15:38:34.563210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:48.728 [2024-11-20 15:38:34.563219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:48.728 
[2024-11-20 15:38:34.563231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:48.728 [2024-11-20 15:38:34.563241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:48.728 [2024-11-20 15:38:34.563252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:48.728 [2024-11-20 15:38:34.563261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:48.728 [2024-11-20 15:38:34.563273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:48.728 [2024-11-20 15:38:34.563282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:48.728 [2024-11-20 15:38:34.563304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:48.728 [2024-11-20 15:38:34.563314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:48.728 [2024-11-20 15:38:34.563328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:48.728 [2024-11-20 15:38:34.563338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:48.728 [2024-11-20 15:38:34.563351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:48.728 [2024-11-20 15:38:34.563360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:48.728 [2024-11-20 15:38:34.563372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:48.728 [2024-11-20 15:38:34.563381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:48.728 [2024-11-20 15:38:34.563392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:48.728 [2024-11-20 15:38:34.563401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:48.728 [2024-11-20 15:38:34.563417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:48.728 [2024-11-20 15:38:34.563427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:48.728 [2024-11-20 15:38:34.563438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:48.728 [2024-11-20 15:38:34.563447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:48.728 [2024-11-20 15:38:34.563459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:48.728 [2024-11-20 15:38:34.563468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:48.728 [2024-11-20 15:38:34.563480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:48.728 [2024-11-20 15:38:34.563488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:48.728 [2024-11-20 15:38:34.563502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:48.728 [2024-11-20 15:38:34.563512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:48.728 [2024-11-20 15:38:34.563523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:48.728 [2024-11-20 15:38:34.563532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:48.728 [2024-11-20 15:38:34.563543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:48.729 [2024-11-20 15:38:34.563552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:48.729 [2024-11-20 15:38:34.563563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:48.729 [2024-11-20 15:38:34.563830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:26:48.729 [2024-11-20 15:38:34.563872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:48.729 [2024-11-20 15:38:34.563904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:48.729 [2024-11-20 15:38:34.563937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:48.729 [2024-11-20 15:38:34.563967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:48.729 [2024-11-20 15:38:34.564000] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:48.729 [2024-11-20 15:38:34.564031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:48.729 [2024-11-20 15:38:34.564130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:48.729 [2024-11-20 15:38:34.564168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:48.729 [2024-11-20 15:38:34.564205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:48.729 [2024-11-20 15:38:34.564236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:48.729 [2024-11-20 15:38:34.564270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:48.729 [2024-11-20 15:38:34.564301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:48.729 [2024-11-20 15:38:34.564333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:48.729 [2024-11-20 15:38:34.564364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:48.729 [2024-11-20 15:38:34.564521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:48.729 [2024-11-20 15:38:34.564586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:48.729 [2024-11-20 15:38:34.564646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:48.729 [2024-11-20 15:38:34.564696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:48.729 [2024-11-20 15:38:34.564797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:48.729 [2024-11-20 15:38:34.564851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:48.729 [2024-11-20 15:38:34.564903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:48.729 [2024-11-20 15:38:34.564952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:48.729 [2024-11-20 15:38:34.565003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:48.729 [2024-11-20 15:38:34.565095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:48.729 [2024-11-20 15:38:34.565152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:48.729 [2024-11-20 15:38:34.565202] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:48.729 [2024-11-20 15:38:34.565237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:48.729 [2024-11-20 15:38:34.565248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:48.729 [2024-11-20 15:38:34.565261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:48.729 [2024-11-20 15:38:34.565272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:48.729 [2024-11-20 15:38:34.565286] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:48.729 [2024-11-20 15:38:34.565298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:48.729 [2024-11-20 15:38:34.565312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:48.729 [2024-11-20 15:38:34.565323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:48.729 [2024-11-20 15:38:34.565336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:48.729 [2024-11-20 15:38:34.565347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:48.729 [2024-11-20 15:38:34.565362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.729 [2024-11-20 15:38:34.565376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:48.729 [2024-11-20 15:38:34.565390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.269 ms 00:26:48.729 [2024-11-20 15:38:34.565400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.729 [2024-11-20 15:38:34.565449] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:26:48.729 [2024-11-20 15:38:34.565462] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:51.266 [2024-11-20 15:38:37.098493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.266 [2024-11-20 15:38:37.098552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:51.266 [2024-11-20 15:38:37.098588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2533.025 ms 00:26:51.266 [2024-11-20 15:38:37.098600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.266 [2024-11-20 15:38:37.135175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.266 [2024-11-20 15:38:37.135221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:51.266 [2024-11-20 15:38:37.135239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.253 ms 00:26:51.266 [2024-11-20 15:38:37.135250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.266 [2024-11-20 15:38:37.135396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.266 [2024-11-20 15:38:37.135410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:51.266 [2024-11-20 15:38:37.135428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:51.266 [2024-11-20 15:38:37.135438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.266 [2024-11-20 15:38:37.197166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.266 [2024-11-20 15:38:37.197371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:51.266 [2024-11-20 15:38:37.197403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.683 ms 00:26:51.266 [2024-11-20 15:38:37.197414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.266 [2024-11-20 15:38:37.197459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.266 [2024-11-20 15:38:37.197475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:51.266 [2024-11-20 15:38:37.197490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:51.266 [2024-11-20 15:38:37.197500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.266 [2024-11-20 15:38:37.198023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.266 [2024-11-20 15:38:37.198040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:51.266 [2024-11-20 15:38:37.198053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:26:51.266 [2024-11-20 15:38:37.198064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.266 [2024-11-20 15:38:37.198174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.266 [2024-11-20 15:38:37.198187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:51.266 [2024-11-20 15:38:37.198202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:26:51.266 [2024-11-20 15:38:37.198213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.266 [2024-11-20 15:38:37.217629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.266 [2024-11-20 15:38:37.217666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:51.266 [2024-11-20 
15:38:37.217684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.395 ms 00:26:51.266 [2024-11-20 15:38:37.217695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.525 [2024-11-20 15:38:37.230237] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:26:51.525 [2024-11-20 15:38:37.236097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.525 [2024-11-20 15:38:37.236276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:51.525 [2024-11-20 15:38:37.236315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.305 ms 00:26:51.525 [2024-11-20 15:38:37.236328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.525 [2024-11-20 15:38:37.307661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.525 [2024-11-20 15:38:37.307915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:51.525 [2024-11-20 15:38:37.307959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.292 ms 00:26:51.525 [2024-11-20 15:38:37.307974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.525 [2024-11-20 15:38:37.308203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.525 [2024-11-20 15:38:37.308229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:51.525 [2024-11-20 15:38:37.308242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:26:51.525 [2024-11-20 15:38:37.308255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.525 [2024-11-20 15:38:37.343949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.525 [2024-11-20 15:38:37.343992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:51.525 [2024-11-20 15:38:37.344007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.618 ms 00:26:51.525 [2024-11-20 15:38:37.344019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.525 [2024-11-20 15:38:37.378831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.525 [2024-11-20 15:38:37.378871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:51.525 [2024-11-20 15:38:37.378901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.774 ms 00:26:51.525 [2024-11-20 15:38:37.378913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.525 [2024-11-20 15:38:37.379665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.525 [2024-11-20 15:38:37.379687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:51.525 [2024-11-20 15:38:37.379698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:26:51.525 [2024-11-20 15:38:37.379711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.525 [2024-11-20 15:38:37.477480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.525 [2024-11-20 15:38:37.477539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:51.525 [2024-11-20 15:38:37.477554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.718 ms 00:26:51.525 [2024-11-20 15:38:37.477580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.784 [2024-11-20 
15:38:37.515052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.784 [2024-11-20 15:38:37.515100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:51.784 [2024-11-20 15:38:37.515118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.389 ms 00:26:51.784 [2024-11-20 15:38:37.515131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.784 [2024-11-20 15:38:37.551729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.784 [2024-11-20 15:38:37.551773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:51.784 [2024-11-20 15:38:37.551787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.558 ms 00:26:51.785 [2024-11-20 15:38:37.551800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.785 [2024-11-20 15:38:37.587049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.785 [2024-11-20 15:38:37.587093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:51.785 [2024-11-20 15:38:37.587122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.210 ms 00:26:51.785 [2024-11-20 15:38:37.587135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.785 [2024-11-20 15:38:37.587177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.785 [2024-11-20 15:38:37.587194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:51.785 [2024-11-20 15:38:37.587206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:51.785 [2024-11-20 15:38:37.587219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.785 [2024-11-20 15:38:37.587316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.785 [2024-11-20 15:38:37.587331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:51.785 [2024-11-20 15:38:37.587342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:51.785 [2024-11-20 15:38:37.587354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.785 [2024-11-20 15:38:37.588466] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3041.465 ms, result 0 00:26:51.785 { 00:26:51.785 "name": "ftl0", 00:26:51.785 "uuid": "9fe7bfcd-a3a6-466a-a83d-ed51d1fb28e7" 00:26:51.785 } 00:26:51.785 15:38:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:26:51.785 15:38:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:26:51.785 15:38:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:26:52.042 15:38:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:26:52.042 [2024-11-20 15:38:37.976872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:26:52.042 I/O size of 69632 is greater than zero copy threshold (65536). 00:26:52.042 Zero copy mechanism will not be used. 00:26:52.042 Running I/O for 4 seconds... 
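
A note on the zero-copy message above: the -o 69632 request size passed to bdevperf.py works out to 68 KiB, 4 KiB past the 64 KiB (65536-byte) zero-copy threshold the tool reports, so this run falls back to buffered copies. A minimal sketch of that comparison, using only the two values printed in the log:

    # Sketch only: reproduces the threshold check bdevperf reports above.
    io_size=69632        # -o argument to bdevperf.py (68 KiB)
    threshold=65536      # zero-copy threshold from the log message (64 KiB)
    echo "$(( io_size - threshold )) bytes over the threshold"   # prints 4096
    (( io_size > threshold )) && echo 'zero copy mechanism will not be used'
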
00:26:54.355 1734.00 IOPS, 115.15 MiB/s [2024-11-20T15:38:41.246Z] 1803.00 IOPS, 119.73 MiB/s [2024-11-20T15:38:42.179Z] 1862.00 IOPS, 123.65 MiB/s [2024-11-20T15:38:42.179Z] 1882.50 IOPS, 125.01 MiB/s 00:26:56.221 Latency(us) 00:26:56.221 [2024-11-20T15:38:42.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.221 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:26:56.221 ftl0 : 4.00 1881.90 124.97 0.00 0.00 556.12 224.30 5180.46 00:26:56.221 [2024-11-20T15:38:42.179Z] =================================================================================================================== 00:26:56.221 [2024-11-20T15:38:42.179Z] Total : 1881.90 124.97 0.00 0.00 556.12 224.30 5180.46 00:26:56.221 [2024-11-20 15:38:41.988315] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:26:56.221 { 00:26:56.221 "results": [ 00:26:56.221 { 00:26:56.221 "job": "ftl0", 00:26:56.221 "core_mask": "0x1", 00:26:56.221 "workload": "randwrite", 00:26:56.221 "status": "finished", 00:26:56.221 "queue_depth": 1, 00:26:56.221 "io_size": 69632, 00:26:56.221 "runtime": 4.001809, 00:26:56.221 "iops": 1881.898911217402, 00:26:56.221 "mibps": 124.9698495730306, 00:26:56.221 "io_failed": 0, 00:26:56.221 "io_timeout": 0, 00:26:56.221 "avg_latency_us": 556.1171245202371, 00:26:56.221 "min_latency_us": 224.3047619047619, 00:26:56.221 "max_latency_us": 5180.464761904762 00:26:56.221 } 00:26:56.221 ], 00:26:56.221 "core_count": 1 00:26:56.221 } 00:26:56.221 15:38:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:26:56.221 [2024-11-20 15:38:42.144925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:26:56.221 Running I/O for 4 seconds... 
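
The mibps figure in the 69632-byte result block above follows from the other JSON fields as iops * io_size / 2^20; a quick cross-check with values copied from that block (awk used purely as a calculator here, not part of the test):

    # Sketch: MiB/s = iops * io_size / 2^20, values from the ftl0 JSON above.
    awk 'BEGIN { printf "%.4f MiB/s\n", 1881.898911217402 * 69632 / 1048576 }'
    # prints ~124.9698, matching the reported "mibps" field
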
00:26:58.619 10741.00 IOPS, 41.96 MiB/s [2024-11-20T15:38:45.158Z] 10564.50 IOPS, 41.27 MiB/s [2024-11-20T15:38:46.533Z] 10395.00 IOPS, 40.61 MiB/s [2024-11-20T15:38:46.533Z] 10330.50 IOPS, 40.35 MiB/s 00:27:00.575 Latency(us) 00:27:00.575 [2024-11-20T15:38:46.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.575 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:27:00.575 ftl0 : 4.02 10320.19 40.31 0.00 0.00 12377.39 253.56 20347.37 00:27:00.575 [2024-11-20T15:38:46.533Z] =================================================================================================================== 00:27:00.575 [2024-11-20T15:38:46.533Z] Total : 10320.19 40.31 0.00 0.00 12377.39 0.00 20347.37 00:27:00.575 [2024-11-20 15:38:46.171630] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:00.575 { 00:27:00.575 "results": [ 00:27:00.575 { 00:27:00.575 "job": "ftl0", 00:27:00.575 "core_mask": "0x1", 00:27:00.575 "workload": "randwrite", 00:27:00.575 "status": "finished", 00:27:00.575 "queue_depth": 128, 00:27:00.575 "io_size": 4096, 00:27:00.575 "runtime": 4.016108, 00:27:00.575 "iops": 10320.190592484067, 00:27:00.575 "mibps": 40.313244501890885, 00:27:00.575 "io_failed": 0, 00:27:00.575 "io_timeout": 0, 00:27:00.575 "avg_latency_us": 12377.386087981553, 00:27:00.575 "min_latency_us": 253.56190476190477, 00:27:00.575 "max_latency_us": 20347.367619047618 00:27:00.575 } 00:27:00.575 ], 00:27:00.575 "core_count": 1 00:27:00.575 } 00:27:00.575 15:38:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:27:00.575 [2024-11-20 15:38:46.324470] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:00.575 Running I/O for 4 seconds... 
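
The -q 128 randwrite numbers above are also self-consistent under Little's law, assuming the queue stayed saturated for the whole run: mean in-flight I/Os ≈ IOPS * mean latency, which lands almost exactly on the configured queue depth:

    # Sketch: Little's law check, values from the -q 128 randwrite JSON above.
    awk 'BEGIN { printf "%.1f I/Os in flight\n",
                 10320.190592484067 * 12377.386087981553 / 1e6 }'
    # prints ~127.7, consistent with the configured queue depth of 128
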
00:27:02.446 8032.00 IOPS, 31.38 MiB/s [2024-11-20T15:38:49.344Z] 8087.50 IOPS, 31.59 MiB/s [2024-11-20T15:38:50.720Z] 8101.67 IOPS, 31.65 MiB/s [2024-11-20T15:38:50.720Z] 8128.75 IOPS, 31.75 MiB/s 00:27:04.762 Latency(us) 00:27:04.762 [2024-11-20T15:38:50.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.762 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:04.762 Verification LBA range: start 0x0 length 0x1400000 00:27:04.762 ftl0 : 4.01 8140.26 31.80 0.00 0.00 15674.94 282.82 20097.71 00:27:04.762 [2024-11-20T15:38:50.720Z] =================================================================================================================== 00:27:04.762 [2024-11-20T15:38:50.720Z] Total : 8140.26 31.80 0.00 0.00 15674.94 0.00 20097.71 00:27:04.762 [2024-11-20 15:38:50.354772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:04.762 { 00:27:04.762 "results": [ 00:27:04.762 { 00:27:04.762 "job": "ftl0", 00:27:04.762 "core_mask": "0x1", 00:27:04.762 "workload": "verify", 00:27:04.762 "status": "finished", 00:27:04.762 "verify_range": { 00:27:04.762 "start": 0, 00:27:04.762 "length": 20971520 00:27:04.762 }, 00:27:04.762 "queue_depth": 128, 00:27:04.762 "io_size": 4096, 00:27:04.762 "runtime": 4.009945, 00:27:04.762 "iops": 8140.2612754040265, 00:27:04.762 "mibps": 31.79789560704698, 00:27:04.762 "io_failed": 0, 00:27:04.762 "io_timeout": 0, 00:27:04.762 "avg_latency_us": 15674.939492152967, 00:27:04.762 "min_latency_us": 282.81904761904764, 00:27:04.762 "max_latency_us": 20097.706666666665 00:27:04.762 } 00:27:04.762 ], 00:27:04.762 "core_count": 1 00:27:04.762 } 00:27:04.762 15:38:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-11-20 15:38:50.623878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.762 [2024-11-20 15:38:50.624087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:04.762 [2024-11-20 15:38:50.624196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:04.762 [2024-11-20 15:38:50.624239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.762 [2024-11-20 15:38:50.624299] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:04.762 [2024-11-20 15:38:50.628600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.762 [2024-11-20 15:38:50.628746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:04.762 [2024-11-20 15:38:50.628788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.065 ms 00:27:04.762 [2024-11-20 15:38:50.628799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.762 [2024-11-20 15:38:50.630292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.762 [2024-11-20 15:38:50.630330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:04.762 [2024-11-20 15:38:50.630347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:27:04.762 [2024-11-20 15:38:50.630364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.021 [2024-11-20 15:38:50.797000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.021 [2024-11-20 15:38:50.797053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:27:05.021 [2024-11-20 15:38:50.797081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 166.606 ms 00:27:05.021 [2024-11-20 15:38:50.797094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.021 [2024-11-20 15:38:50.802252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.021 [2024-11-20 15:38:50.802285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:05.021 [2024-11-20 15:38:50.802300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.111 ms 00:27:05.021 [2024-11-20 15:38:50.802327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.021 [2024-11-20 15:38:50.837537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.021 [2024-11-20 15:38:50.837587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:05.021 [2024-11-20 15:38:50.837621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.152 ms 00:27:05.021 [2024-11-20 15:38:50.837631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.021 [2024-11-20 15:38:50.858442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.021 [2024-11-20 15:38:50.858482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:05.021 [2024-11-20 15:38:50.858498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.767 ms 00:27:05.021 [2024-11-20 15:38:50.858508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.021 [2024-11-20 15:38:50.858689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.021 [2024-11-20 15:38:50.858704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:05.021 [2024-11-20 15:38:50.858721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:27:05.021 [2024-11-20 15:38:50.858731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.021 [2024-11-20 15:38:50.893924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.021 [2024-11-20 15:38:50.893958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:05.021 [2024-11-20 15:38:50.893975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.171 ms 00:27:05.021 [2024-11-20 15:38:50.893984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.021 [2024-11-20 15:38:50.928232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.021 [2024-11-20 15:38:50.928267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:05.021 [2024-11-20 15:38:50.928282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.207 ms 00:27:05.021 [2024-11-20 15:38:50.928308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.021 [2024-11-20 15:38:50.962362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.021 [2024-11-20 15:38:50.962516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:05.021 [2024-11-20 15:38:50.962583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.012 ms 00:27:05.021 [2024-11-20 15:38:50.962594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.281 [2024-11-20 15:38:50.996314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.281 [2024-11-20 
15:38:50.996472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:05.281 [2024-11-20 15:38:50.996500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.625 ms 00:27:05.281 [2024-11-20 15:38:50.996511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.281 [2024-11-20 15:38:50.996589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:05.281 [2024-11-20 15:38:50.996607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:05.281 [2024-11-20 15:38:50.996623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:05.281 [2024-11-20 15:38:50.996635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:05.281 [2024-11-20 15:38:50.996660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:05.281 [2024-11-20 15:38:50.996671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:05.281 [2024-11-20 15:38:50.996684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:05.281 [2024-11-20 15:38:50.996695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:05.281 [2024-11-20 15:38:50.996708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:05.281 [2024-11-20 15:38:50.996719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.996992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997505] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997828] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:05.282 [2024-11-20 15:38:50.997842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:05.283 [2024-11-20 15:38:50.997852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:05.283 [2024-11-20 15:38:50.997865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:05.283 [2024-11-20 15:38:50.997883] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:05.283 [2024-11-20 15:38:50.997895] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9fe7bfcd-a3a6-466a-a83d-ed51d1fb28e7 00:27:05.283 [2024-11-20 15:38:50.997906] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:05.283 [2024-11-20 15:38:50.997922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:05.283 [2024-11-20 15:38:50.997931] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:05.283 [2024-11-20 15:38:50.997944] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:05.283 [2024-11-20 15:38:50.997953] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:05.283 [2024-11-20 15:38:50.997966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:05.283 [2024-11-20 15:38:50.997976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:05.283 [2024-11-20 15:38:50.997990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:05.283 [2024-11-20 15:38:50.998009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:05.283 [2024-11-20 15:38:50.998021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.283 [2024-11-20 15:38:50.998031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:05.283 [2024-11-20 15:38:50.998043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:27:05.283 [2024-11-20 15:38:50.998053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.283 [2024-11-20 15:38:51.017789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.283 [2024-11-20 15:38:51.017924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:05.283 [2024-11-20 15:38:51.018022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.685 ms 00:27:05.283 [2024-11-20 15:38:51.018058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.283 [2024-11-20 15:38:51.018693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.283 [2024-11-20 15:38:51.018793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:05.283 [2024-11-20 15:38:51.018867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:27:05.283 [2024-11-20 15:38:51.018902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.283 [2024-11-20 15:38:51.072451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.283 [2024-11-20 15:38:51.072640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:05.283 [2024-11-20 15:38:51.072743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.283 [2024-11-20 15:38:51.072782] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:05.283 [2024-11-20 15:38:51.072860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.283 [2024-11-20 15:38:51.072979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:05.283 [2024-11-20 15:38:51.073043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.283 [2024-11-20 15:38:51.073074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.283 [2024-11-20 15:38:51.073200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.283 [2024-11-20 15:38:51.073303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:05.283 [2024-11-20 15:38:51.073344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.283 [2024-11-20 15:38:51.073375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.283 [2024-11-20 15:38:51.073422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.283 [2024-11-20 15:38:51.073454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:05.283 [2024-11-20 15:38:51.073560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.283 [2024-11-20 15:38:51.073605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.283 [2024-11-20 15:38:51.195095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.283 [2024-11-20 15:38:51.195339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:05.283 [2024-11-20 15:38:51.195501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.283 [2024-11-20 15:38:51.195540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.544 [2024-11-20 15:38:51.292245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.544 [2024-11-20 15:38:51.292458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:05.544 [2024-11-20 15:38:51.292554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.544 [2024-11-20 15:38:51.292613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.544 [2024-11-20 15:38:51.292756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.544 [2024-11-20 15:38:51.292799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:05.544 [2024-11-20 15:38:51.292893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.544 [2024-11-20 15:38:51.292928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.544 [2024-11-20 15:38:51.293022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.544 [2024-11-20 15:38:51.293059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:05.544 [2024-11-20 15:38:51.293093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.544 [2024-11-20 15:38:51.293223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.544 [2024-11-20 15:38:51.293372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.544 [2024-11-20 15:38:51.293410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:05.544 [2024-11-20 15:38:51.293502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:27:05.544 [2024-11-20 15:38:51.293632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.544 [2024-11-20 15:38:51.293721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.544 [2024-11-20 15:38:51.293797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:05.544 [2024-11-20 15:38:51.293837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.544 [2024-11-20 15:38:51.293901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.544 [2024-11-20 15:38:51.293973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.544 [2024-11-20 15:38:51.294007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:05.544 [2024-11-20 15:38:51.294093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.544 [2024-11-20 15:38:51.294128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.544 [2024-11-20 15:38:51.294203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.544 [2024-11-20 15:38:51.294248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:05.544 [2024-11-20 15:38:51.294349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.544 [2024-11-20 15:38:51.294380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.545 [2024-11-20 15:38:51.294610] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 670.650 ms, result 0 00:27:05.545 true 00:27:05.545 15:38:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78036 00:27:05.545 15:38:51 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78036 ']' 00:27:05.545 15:38:51 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78036 00:27:05.545 15:38:51 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:05.545 15:38:51 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.545 15:38:51 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78036 00:27:05.545 killing process with pid 78036 00:27:05.545 Received shutdown signal, test time was about 4.000000 seconds 00:27:05.545 00:27:05.545 Latency(us) 00:27:05.545 [2024-11-20T15:38:51.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.545 [2024-11-20T15:38:51.503Z] =================================================================================================================== 00:27:05.545 [2024-11-20T15:38:51.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.545 15:38:51 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:05.545 15:38:51 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:05.545 15:38:51 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78036' 00:27:05.545 15:38:51 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78036 00:27:05.545 15:38:51 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78036 00:27:09.742 15:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:09.742 Remove shared memory files 00:27:09.742 15:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:27:09.742 15:38:55 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:09.742 15:38:55 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:27:09.742 15:38:55 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:27:09.742 15:38:55 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:27:09.742 15:38:55 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:09.742 15:38:55 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:27:09.742 ************************************ 00:27:09.743 END TEST ftl_bdevperf 00:27:09.743 ************************************ 00:27:09.743 00:27:09.743 real 0m25.621s 00:27:09.743 user 0m28.653s 00:27:09.743 sys 0m1.298s 00:27:09.743 15:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.743 15:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:10.002 15:38:55 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:10.002 15:38:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:10.002 15:38:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.002 15:38:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:10.002 ************************************ 00:27:10.002 START TEST ftl_trim 00:27:10.002 ************************************ 00:27:10.002 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:10.002 * Looking for test storage... 00:27:10.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:10.002 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:10.002 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:27:10.002 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:10.002 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.002 15:38:55 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:27:10.002 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.002 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:10.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.002 --rc genhtml_branch_coverage=1 00:27:10.002 --rc genhtml_function_coverage=1 00:27:10.002 --rc genhtml_legend=1 00:27:10.002 --rc geninfo_all_blocks=1 00:27:10.002 --rc geninfo_unexecuted_blocks=1 00:27:10.002 00:27:10.002 ' 00:27:10.002 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:10.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.002 --rc genhtml_branch_coverage=1 00:27:10.002 --rc genhtml_function_coverage=1 00:27:10.002 --rc genhtml_legend=1 00:27:10.002 --rc geninfo_all_blocks=1 00:27:10.002 --rc geninfo_unexecuted_blocks=1 00:27:10.002 00:27:10.002 ' 00:27:10.002 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:10.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.002 --rc genhtml_branch_coverage=1 00:27:10.002 --rc genhtml_function_coverage=1 00:27:10.002 --rc genhtml_legend=1 00:27:10.002 --rc geninfo_all_blocks=1 00:27:10.002 --rc geninfo_unexecuted_blocks=1 00:27:10.002 00:27:10.002 ' 00:27:10.002 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:10.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.002 --rc genhtml_branch_coverage=1 00:27:10.002 --rc genhtml_function_coverage=1 00:27:10.002 --rc genhtml_legend=1 00:27:10.002 --rc geninfo_all_blocks=1 00:27:10.002 --rc geninfo_unexecuted_blocks=1 00:27:10.002 00:27:10.002 ' 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:10.002 15:38:55 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:10.261 15:38:55 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78386 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78386 00:27:10.261 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78386 ']' 00:27:10.261 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.261 15:38:55 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:27:10.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.261 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.261 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.261 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.261 15:38:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:10.261 [2024-11-20 15:38:56.107236] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:27:10.261 [2024-11-20 15:38:56.107706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78386 ] 00:27:10.520 [2024-11-20 15:38:56.299741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:10.520 [2024-11-20 15:38:56.417272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.520 [2024-11-20 15:38:56.417364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.520 [2024-11-20 15:38:56.417381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:11.457 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.457 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:11.457 15:38:57 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:11.457 15:38:57 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:27:11.457 15:38:57 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:11.457 15:38:57 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:27:11.457 15:38:57 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:27:11.457 15:38:57 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:11.716 15:38:57 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:11.716 15:38:57 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:27:11.716 15:38:57 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:11.716 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:11.716 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:11.716 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:11.716 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:11.716 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:11.975 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:11.975 { 00:27:11.975 "name": "nvme0n1", 00:27:11.975 "aliases": [ 
00:27:11.975 "24b2f3e0-ca48-464c-924e-f1a365b668cf" 00:27:11.975 ], 00:27:11.975 "product_name": "NVMe disk", 00:27:11.975 "block_size": 4096, 00:27:11.975 "num_blocks": 1310720, 00:27:11.975 "uuid": "24b2f3e0-ca48-464c-924e-f1a365b668cf", 00:27:11.975 "numa_id": -1, 00:27:11.975 "assigned_rate_limits": { 00:27:11.975 "rw_ios_per_sec": 0, 00:27:11.975 "rw_mbytes_per_sec": 0, 00:27:11.975 "r_mbytes_per_sec": 0, 00:27:11.975 "w_mbytes_per_sec": 0 00:27:11.975 }, 00:27:11.975 "claimed": true, 00:27:11.975 "claim_type": "read_many_write_one", 00:27:11.975 "zoned": false, 00:27:11.975 "supported_io_types": { 00:27:11.975 "read": true, 00:27:11.975 "write": true, 00:27:11.975 "unmap": true, 00:27:11.975 "flush": true, 00:27:11.975 "reset": true, 00:27:11.975 "nvme_admin": true, 00:27:11.975 "nvme_io": true, 00:27:11.975 "nvme_io_md": false, 00:27:11.975 "write_zeroes": true, 00:27:11.975 "zcopy": false, 00:27:11.975 "get_zone_info": false, 00:27:11.975 "zone_management": false, 00:27:11.975 "zone_append": false, 00:27:11.975 "compare": true, 00:27:11.975 "compare_and_write": false, 00:27:11.975 "abort": true, 00:27:11.975 "seek_hole": false, 00:27:11.975 "seek_data": false, 00:27:11.975 "copy": true, 00:27:11.975 "nvme_iov_md": false 00:27:11.975 }, 00:27:11.975 "driver_specific": { 00:27:11.975 "nvme": [ 00:27:11.975 { 00:27:11.975 "pci_address": "0000:00:11.0", 00:27:11.975 "trid": { 00:27:11.975 "trtype": "PCIe", 00:27:11.975 "traddr": "0000:00:11.0" 00:27:11.975 }, 00:27:11.975 "ctrlr_data": { 00:27:11.975 "cntlid": 0, 00:27:11.975 "vendor_id": "0x1b36", 00:27:11.975 "model_number": "QEMU NVMe Ctrl", 00:27:11.975 "serial_number": "12341", 00:27:11.975 "firmware_revision": "8.0.0", 00:27:11.975 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:11.975 "oacs": { 00:27:11.975 "security": 0, 00:27:11.975 "format": 1, 00:27:11.975 "firmware": 0, 00:27:11.975 "ns_manage": 1 00:27:11.975 }, 00:27:11.975 "multi_ctrlr": false, 00:27:11.975 "ana_reporting": false 00:27:11.975 }, 00:27:11.975 "vs": { 00:27:11.975 "nvme_version": "1.4" 00:27:11.975 }, 00:27:11.975 "ns_data": { 00:27:11.975 "id": 1, 00:27:11.975 "can_share": false 00:27:11.975 } 00:27:11.975 } 00:27:11.975 ], 00:27:11.975 "mp_policy": "active_passive" 00:27:11.975 } 00:27:11.975 } 00:27:11.975 ]' 00:27:11.975 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:11.975 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:11.975 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:11.975 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:11.975 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:11.975 15:38:57 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:27:11.975 15:38:57 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:27:11.975 15:38:57 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:11.975 15:38:57 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:27:11.975 15:38:57 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:11.975 15:38:57 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:12.235 15:38:58 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=ec5260eb-14d4-4539-b7c7-8ccadf72e4d3 00:27:12.235 15:38:58 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:27:12.235 15:38:58 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u ec5260eb-14d4-4539-b7c7-8ccadf72e4d3 00:27:12.493 15:38:58 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:12.752 15:38:58 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=f4ef6bca-a58c-4507-8a1f-235b505d568d 00:27:12.752 15:38:58 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f4ef6bca-a58c-4507-8a1f-235b505d568d 00:27:12.752 15:38:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:12.752 15:38:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:12.752 15:38:58 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:27:12.752 15:38:58 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:12.752 15:38:58 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:12.752 15:38:58 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:27:12.752 15:38:58 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:12.752 15:38:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:12.752 15:38:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:12.752 15:38:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:12.752 15:38:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:12.752 15:38:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:13.012 15:38:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:13.012 { 00:27:13.012 "name": "021127f4-17d1-4ad0-83c1-e6c1484b25b9", 00:27:13.012 "aliases": [ 00:27:13.012 "lvs/nvme0n1p0" 00:27:13.012 ], 00:27:13.012 "product_name": "Logical Volume", 00:27:13.012 "block_size": 4096, 00:27:13.012 "num_blocks": 26476544, 00:27:13.012 "uuid": "021127f4-17d1-4ad0-83c1-e6c1484b25b9", 00:27:13.012 "assigned_rate_limits": { 00:27:13.012 "rw_ios_per_sec": 0, 00:27:13.012 "rw_mbytes_per_sec": 0, 00:27:13.012 "r_mbytes_per_sec": 0, 00:27:13.012 "w_mbytes_per_sec": 0 00:27:13.012 }, 00:27:13.012 "claimed": false, 00:27:13.012 "zoned": false, 00:27:13.012 "supported_io_types": { 00:27:13.012 "read": true, 00:27:13.012 "write": true, 00:27:13.012 "unmap": true, 00:27:13.012 "flush": false, 00:27:13.012 "reset": true, 00:27:13.012 "nvme_admin": false, 00:27:13.012 "nvme_io": false, 00:27:13.012 "nvme_io_md": false, 00:27:13.012 "write_zeroes": true, 00:27:13.012 "zcopy": false, 00:27:13.012 "get_zone_info": false, 00:27:13.012 "zone_management": false, 00:27:13.012 "zone_append": false, 00:27:13.012 "compare": false, 00:27:13.012 "compare_and_write": false, 00:27:13.012 "abort": false, 00:27:13.012 "seek_hole": true, 00:27:13.012 "seek_data": true, 00:27:13.012 "copy": false, 00:27:13.012 "nvme_iov_md": false 00:27:13.012 }, 00:27:13.012 "driver_specific": { 00:27:13.012 "lvol": { 00:27:13.012 "lvol_store_uuid": "f4ef6bca-a58c-4507-8a1f-235b505d568d", 00:27:13.012 "base_bdev": "nvme0n1", 00:27:13.012 "thin_provision": true, 00:27:13.012 "num_allocated_clusters": 0, 00:27:13.012 "snapshot": false, 00:27:13.012 "clone": false, 00:27:13.012 "esnap_clone": false 00:27:13.012 } 00:27:13.012 } 00:27:13.012 } 00:27:13.012 ]' 00:27:13.012 15:38:58 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:13.012 15:38:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:13.012 15:38:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:13.012 15:38:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:13.012 15:38:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:13.012 15:38:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:13.012 15:38:58 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:27:13.012 15:38:58 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:27:13.271 15:38:58 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:13.528 15:38:59 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:13.528 15:38:59 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:13.528 15:38:59 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:13.528 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:13.528 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:13.528 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:13.528 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:13.528 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:13.786 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:13.786 { 00:27:13.786 "name": "021127f4-17d1-4ad0-83c1-e6c1484b25b9", 00:27:13.786 "aliases": [ 00:27:13.786 "lvs/nvme0n1p0" 00:27:13.786 ], 00:27:13.786 "product_name": "Logical Volume", 00:27:13.786 "block_size": 4096, 00:27:13.786 "num_blocks": 26476544, 00:27:13.787 "uuid": "021127f4-17d1-4ad0-83c1-e6c1484b25b9", 00:27:13.787 "assigned_rate_limits": { 00:27:13.787 "rw_ios_per_sec": 0, 00:27:13.787 "rw_mbytes_per_sec": 0, 00:27:13.787 "r_mbytes_per_sec": 0, 00:27:13.787 "w_mbytes_per_sec": 0 00:27:13.787 }, 00:27:13.787 "claimed": false, 00:27:13.787 "zoned": false, 00:27:13.787 "supported_io_types": { 00:27:13.787 "read": true, 00:27:13.787 "write": true, 00:27:13.787 "unmap": true, 00:27:13.787 "flush": false, 00:27:13.787 "reset": true, 00:27:13.787 "nvme_admin": false, 00:27:13.787 "nvme_io": false, 00:27:13.787 "nvme_io_md": false, 00:27:13.787 "write_zeroes": true, 00:27:13.787 "zcopy": false, 00:27:13.787 "get_zone_info": false, 00:27:13.787 "zone_management": false, 00:27:13.787 "zone_append": false, 00:27:13.787 "compare": false, 00:27:13.787 "compare_and_write": false, 00:27:13.787 "abort": false, 00:27:13.787 "seek_hole": true, 00:27:13.787 "seek_data": true, 00:27:13.787 "copy": false, 00:27:13.787 "nvme_iov_md": false 00:27:13.787 }, 00:27:13.787 "driver_specific": { 00:27:13.787 "lvol": { 00:27:13.787 "lvol_store_uuid": "f4ef6bca-a58c-4507-8a1f-235b505d568d", 00:27:13.787 "base_bdev": "nvme0n1", 00:27:13.787 "thin_provision": true, 00:27:13.787 "num_allocated_clusters": 0, 00:27:13.787 "snapshot": false, 00:27:13.787 "clone": false, 00:27:13.787 "esnap_clone": false 00:27:13.787 } 00:27:13.787 } 00:27:13.787 } 00:27:13.787 ]' 00:27:13.787 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:13.787 15:38:59 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:27:13.787 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:13.787 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:13.787 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:13.787 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:13.787 15:38:59 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:27:13.787 15:38:59 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:14.047 15:38:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:27:14.047 15:38:59 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:27:14.047 15:38:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:14.047 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:14.047 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:14.047 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:14.047 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:14.047 15:38:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 021127f4-17d1-4ad0-83c1-e6c1484b25b9 00:27:14.306 15:39:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:14.306 { 00:27:14.306 "name": "021127f4-17d1-4ad0-83c1-e6c1484b25b9", 00:27:14.306 "aliases": [ 00:27:14.306 "lvs/nvme0n1p0" 00:27:14.306 ], 00:27:14.306 "product_name": "Logical Volume", 00:27:14.306 "block_size": 4096, 00:27:14.306 "num_blocks": 26476544, 00:27:14.306 "uuid": "021127f4-17d1-4ad0-83c1-e6c1484b25b9", 00:27:14.306 "assigned_rate_limits": { 00:27:14.306 "rw_ios_per_sec": 0, 00:27:14.306 "rw_mbytes_per_sec": 0, 00:27:14.306 "r_mbytes_per_sec": 0, 00:27:14.306 "w_mbytes_per_sec": 0 00:27:14.306 }, 00:27:14.306 "claimed": false, 00:27:14.306 "zoned": false, 00:27:14.306 "supported_io_types": { 00:27:14.306 "read": true, 00:27:14.306 "write": true, 00:27:14.306 "unmap": true, 00:27:14.306 "flush": false, 00:27:14.306 "reset": true, 00:27:14.306 "nvme_admin": false, 00:27:14.306 "nvme_io": false, 00:27:14.306 "nvme_io_md": false, 00:27:14.306 "write_zeroes": true, 00:27:14.306 "zcopy": false, 00:27:14.306 "get_zone_info": false, 00:27:14.306 "zone_management": false, 00:27:14.306 "zone_append": false, 00:27:14.306 "compare": false, 00:27:14.306 "compare_and_write": false, 00:27:14.306 "abort": false, 00:27:14.306 "seek_hole": true, 00:27:14.306 "seek_data": true, 00:27:14.306 "copy": false, 00:27:14.306 "nvme_iov_md": false 00:27:14.306 }, 00:27:14.306 "driver_specific": { 00:27:14.307 "lvol": { 00:27:14.307 "lvol_store_uuid": "f4ef6bca-a58c-4507-8a1f-235b505d568d", 00:27:14.307 "base_bdev": "nvme0n1", 00:27:14.307 "thin_provision": true, 00:27:14.307 "num_allocated_clusters": 0, 00:27:14.307 "snapshot": false, 00:27:14.307 "clone": false, 00:27:14.307 "esnap_clone": false 00:27:14.307 } 00:27:14.307 } 00:27:14.307 } 00:27:14.307 ]' 00:27:14.307 15:39:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:14.307 15:39:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:14.307 15:39:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:14.307 15:39:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:27:14.307 15:39:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:14.307 15:39:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:14.307 15:39:00 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:27:14.307 15:39:00 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 021127f4-17d1-4ad0-83c1-e6c1484b25b9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:27:14.567 [2024-11-20 15:39:00.339396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.567 [2024-11-20 15:39:00.339448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:14.567 [2024-11-20 15:39:00.339485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:14.567 [2024-11-20 15:39:00.339496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.567 [2024-11-20 15:39:00.342955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.567 [2024-11-20 15:39:00.343131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:14.567 [2024-11-20 15:39:00.343159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.424 ms 00:27:14.567 [2024-11-20 15:39:00.343170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.567 [2024-11-20 15:39:00.343366] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:14.567 [2024-11-20 15:39:00.344327] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:14.567 [2024-11-20 15:39:00.344366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.567 [2024-11-20 15:39:00.344378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:14.567 [2024-11-20 15:39:00.344391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:27:14.567 [2024-11-20 15:39:00.344402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.567 [2024-11-20 15:39:00.344511] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3adbce50-96c5-4eda-b128-33d3af6d2f46 00:27:14.567 [2024-11-20 15:39:00.345925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.567 [2024-11-20 15:39:00.345961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:14.567 [2024-11-20 15:39:00.345974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:14.567 [2024-11-20 15:39:00.345987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.567 [2024-11-20 15:39:00.353541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.567 [2024-11-20 15:39:00.353746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:14.567 [2024-11-20 15:39:00.353771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.477 ms 00:27:14.567 [2024-11-20 15:39:00.353787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.567 [2024-11-20 15:39:00.353954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.567 [2024-11-20 15:39:00.353972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:14.567 [2024-11-20 15:39:00.353984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.086 ms 00:27:14.567 [2024-11-20 15:39:00.354001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.567 [2024-11-20 15:39:00.354049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.567 [2024-11-20 15:39:00.354063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:14.567 [2024-11-20 15:39:00.354073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:14.567 [2024-11-20 15:39:00.354089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.567 [2024-11-20 15:39:00.354128] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:14.567 [2024-11-20 15:39:00.358913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.567 [2024-11-20 15:39:00.358946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:14.567 [2024-11-20 15:39:00.358963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.789 ms 00:27:14.567 [2024-11-20 15:39:00.358989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.567 [2024-11-20 15:39:00.359057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.567 [2024-11-20 15:39:00.359069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:14.567 [2024-11-20 15:39:00.359082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:14.567 [2024-11-20 15:39:00.359108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.567 [2024-11-20 15:39:00.359145] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:14.567 [2024-11-20 15:39:00.359273] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:14.567 [2024-11-20 15:39:00.359293] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:14.567 [2024-11-20 15:39:00.359306] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:14.567 [2024-11-20 15:39:00.359322] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:14.567 [2024-11-20 15:39:00.359335] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:14.568 [2024-11-20 15:39:00.359348] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:14.568 [2024-11-20 15:39:00.359358] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:14.568 [2024-11-20 15:39:00.359371] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:14.568 [2024-11-20 15:39:00.359392] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:14.568 [2024-11-20 15:39:00.359406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.568 [2024-11-20 15:39:00.359416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:14.568 [2024-11-20 15:39:00.359429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:27:14.568 [2024-11-20 15:39:00.359439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.568 [2024-11-20 15:39:00.359532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.568 
[2024-11-20 15:39:00.359543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:14.568 [2024-11-20 15:39:00.359557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:14.568 [2024-11-20 15:39:00.359583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.568 [2024-11-20 15:39:00.359701] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:14.568 [2024-11-20 15:39:00.359713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:14.568 [2024-11-20 15:39:00.359726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.568 [2024-11-20 15:39:00.359737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.568 [2024-11-20 15:39:00.359750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:14.568 [2024-11-20 15:39:00.359760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:14.568 [2024-11-20 15:39:00.359772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:14.568 [2024-11-20 15:39:00.359782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:14.568 [2024-11-20 15:39:00.359795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:14.568 [2024-11-20 15:39:00.359805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.568 [2024-11-20 15:39:00.359816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:14.568 [2024-11-20 15:39:00.359826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:14.568 [2024-11-20 15:39:00.359838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.568 [2024-11-20 15:39:00.359847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:14.568 [2024-11-20 15:39:00.359859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:14.568 [2024-11-20 15:39:00.359868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.568 [2024-11-20 15:39:00.359882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:14.568 [2024-11-20 15:39:00.359892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:14.568 [2024-11-20 15:39:00.359903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.568 [2024-11-20 15:39:00.359913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:14.568 [2024-11-20 15:39:00.359926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:14.568 [2024-11-20 15:39:00.359936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.568 [2024-11-20 15:39:00.359947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:14.568 [2024-11-20 15:39:00.359957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:14.568 [2024-11-20 15:39:00.359968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.568 [2024-11-20 15:39:00.359978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:14.568 [2024-11-20 15:39:00.359989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:14.568 [2024-11-20 15:39:00.359998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.568 [2024-11-20 15:39:00.360010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:27:14.568 [2024-11-20 15:39:00.360020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:14.568 [2024-11-20 15:39:00.360031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.568 [2024-11-20 15:39:00.360040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:14.568 [2024-11-20 15:39:00.360054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:14.568 [2024-11-20 15:39:00.360063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.568 [2024-11-20 15:39:00.360075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:14.568 [2024-11-20 15:39:00.360084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:14.568 [2024-11-20 15:39:00.360096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.568 [2024-11-20 15:39:00.360105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:14.568 [2024-11-20 15:39:00.360117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:14.568 [2024-11-20 15:39:00.360127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.568 [2024-11-20 15:39:00.360142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:14.568 [2024-11-20 15:39:00.360152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:14.568 [2024-11-20 15:39:00.360163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.568 [2024-11-20 15:39:00.360172] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:14.568 [2024-11-20 15:39:00.360185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:14.568 [2024-11-20 15:39:00.360195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.568 [2024-11-20 15:39:00.360207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.568 [2024-11-20 15:39:00.360217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:14.568 [2024-11-20 15:39:00.360233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:14.568 [2024-11-20 15:39:00.360242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:14.568 [2024-11-20 15:39:00.360254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:14.568 [2024-11-20 15:39:00.360263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:14.568 [2024-11-20 15:39:00.360276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:14.568 [2024-11-20 15:39:00.360289] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:14.568 [2024-11-20 15:39:00.360304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.568 [2024-11-20 15:39:00.360319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:14.568 [2024-11-20 15:39:00.360332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:14.568 [2024-11-20 15:39:00.360343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:27:14.568 [2024-11-20 15:39:00.360355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:14.568 [2024-11-20 15:39:00.360366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:14.568 [2024-11-20 15:39:00.360378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:14.568 [2024-11-20 15:39:00.360388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:14.568 [2024-11-20 15:39:00.360401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:14.568 [2024-11-20 15:39:00.360411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:14.568 [2024-11-20 15:39:00.360426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:14.568 [2024-11-20 15:39:00.360437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:14.568 [2024-11-20 15:39:00.360449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:14.568 [2024-11-20 15:39:00.360459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:14.568 [2024-11-20 15:39:00.360472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:14.568 [2024-11-20 15:39:00.360482] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:14.568 [2024-11-20 15:39:00.360501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.568 [2024-11-20 15:39:00.360513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:14.568 [2024-11-20 15:39:00.360526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:14.568 [2024-11-20 15:39:00.360537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:14.568 [2024-11-20 15:39:00.360549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:14.568 [2024-11-20 15:39:00.360560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.568 [2024-11-20 15:39:00.360583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:14.568 [2024-11-20 15:39:00.360594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:27:14.568 [2024-11-20 15:39:00.360607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.568 [2024-11-20 15:39:00.360686] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:27:14.568 [2024-11-20 15:39:00.360704] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:17.106 [2024-11-20 15:39:02.887042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.106 [2024-11-20 15:39:02.887320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:17.107 [2024-11-20 15:39:02.887347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2526.338 ms 00:27:17.107 [2024-11-20 15:39:02.887362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.107 [2024-11-20 15:39:02.926600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.107 [2024-11-20 15:39:02.926650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:17.107 [2024-11-20 15:39:02.926667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.883 ms 00:27:17.107 [2024-11-20 15:39:02.926680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.107 [2024-11-20 15:39:02.926848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.107 [2024-11-20 15:39:02.926881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:17.107 [2024-11-20 15:39:02.926893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:17.107 [2024-11-20 15:39:02.926909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.107 [2024-11-20 15:39:02.989997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.107 [2024-11-20 15:39:02.990046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:17.107 [2024-11-20 15:39:02.990078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.023 ms 00:27:17.107 [2024-11-20 15:39:02.990093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.107 [2024-11-20 15:39:02.990196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.107 [2024-11-20 15:39:02.990213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:17.107 [2024-11-20 15:39:02.990224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:17.107 [2024-11-20 15:39:02.990237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.107 [2024-11-20 15:39:02.990703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.107 [2024-11-20 15:39:02.990724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:17.107 [2024-11-20 15:39:02.990736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:27:17.107 [2024-11-20 15:39:02.990748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.107 [2024-11-20 15:39:02.990863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.107 [2024-11-20 15:39:02.990881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:17.107 [2024-11-20 15:39:02.990893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:27:17.107 [2024-11-20 15:39:02.990908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.107 [2024-11-20 15:39:03.012068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.107 [2024-11-20 15:39:03.012114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:27:17.107 [2024-11-20 15:39:03.012146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.107 ms 00:27:17.107 [2024-11-20 15:39:03.012159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.107 [2024-11-20 15:39:03.025134] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:17.107 [2024-11-20 15:39:03.041848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.107 [2024-11-20 15:39:03.041905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:17.107 [2024-11-20 15:39:03.041924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.544 ms 00:27:17.107 [2024-11-20 15:39:03.041935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-20 15:39:03.118658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-20 15:39:03.118718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:17.367 [2024-11-20 15:39:03.118738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.576 ms 00:27:17.367 [2024-11-20 15:39:03.118749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.367 [2024-11-20 15:39:03.118995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-20 15:39:03.119009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:17.367 [2024-11-20 15:39:03.119027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:27:17.367 [2024-11-20 15:39:03.119037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.367 [2024-11-20 15:39:03.155119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-20 15:39:03.155160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:17.367 [2024-11-20 15:39:03.155194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.044 ms 00:27:17.367 [2024-11-20 15:39:03.155205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.367 [2024-11-20 15:39:03.191844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-20 15:39:03.191883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:17.367 [2024-11-20 15:39:03.191902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.527 ms 00:27:17.367 [2024-11-20 15:39:03.191911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.367 [2024-11-20 15:39:03.192685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-20 15:39:03.192707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:17.367 [2024-11-20 15:39:03.192722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:27:17.367 [2024-11-20 15:39:03.192732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.367 [2024-11-20 15:39:03.297542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-20 15:39:03.297605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:17.367 [2024-11-20 15:39:03.297628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.761 ms 00:27:17.367 [2024-11-20 15:39:03.297639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
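[annotation] The FTL startup trace above is the effect of the device setup recorded earlier in this log. Below is a minimal sketch of that RPC sequence, condensed from the exact commands logged by ftl/common.sh and ftl/trim.sh, assuming a running SPDK target reachable through the same rpc.py path (the UUIDs shown are simply the ones this particular run generated):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Base device: a 103424 MiB thin-provisioned (-t) lvol on the data NVMe.
  # Both RPCs print the UUID of the object they create, so it can be captured,
  # which is what the test scripts do.
  LVS=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)             # f4ef6bca-... in this run
  BASE=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS")  # 021127f4-... in this run

  # NV cache: attach the second NVMe controller (exposing nvc0n1) and split
  # off a single 5171 MiB partition, which becomes nvc0n1p0.
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1

  # FTL bdev on top of base + cache. "-t 240" raises the RPC client timeout so
  # the call survives the NV cache scrub traced above (about 2.5 s for this
  # 5 GiB-class cache; larger caches take proportionally longer). The
  # --l2p_dram_limit 60 value matches the "l2p maximum resident size is:
  # 59 (of 60) MiB" line that follows in the trace.
  $RPC -t 240 bdev_ftl_create -b ftl0 -d "$BASE" -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

[annotation] The teardown traced further below is the mirror image: bdev_ftl_unload -b ftl0 persists the L2P, valid map, and band/chunk metadata, sets the clean state, and then rolls back initialization, closing the cache and base bdevs last.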
00:27:17.683 [2024-11-20 15:39:03.336310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.683 [2024-11-20 15:39:03.336371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:17.683 [2024-11-20 15:39:03.336390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.547 ms 00:27:17.683 [2024-11-20 15:39:03.336401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.683 [2024-11-20 15:39:03.374291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.683 [2024-11-20 15:39:03.374462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:17.683 [2024-11-20 15:39:03.374489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.790 ms 00:27:17.683 [2024-11-20 15:39:03.374500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.683 [2024-11-20 15:39:03.411611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.683 [2024-11-20 15:39:03.411767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:17.683 [2024-11-20 15:39:03.411794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.871 ms 00:27:17.683 [2024-11-20 15:39:03.411819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.683 [2024-11-20 15:39:03.411911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.683 [2024-11-20 15:39:03.411928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:17.683 [2024-11-20 15:39:03.411945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:17.683 [2024-11-20 15:39:03.411955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.683 [2024-11-20 15:39:03.412037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.683 [2024-11-20 15:39:03.412049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:17.683 [2024-11-20 15:39:03.412062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:17.683 [2024-11-20 15:39:03.412073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.683 [2024-11-20 15:39:03.413283] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:17.683 [2024-11-20 15:39:03.418011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3073.526 ms, result 0 00:27:17.683 [2024-11-20 15:39:03.418973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:17.683 { 00:27:17.683 "name": "ftl0", 00:27:17.683 "uuid": "3adbce50-96c5-4eda-b128-33d3af6d2f46" 00:27:17.683 } 00:27:17.683 15:39:03 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:27:17.683 15:39:03 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:27:17.683 15:39:03 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:17.683 15:39:03 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:27:17.683 15:39:03 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:17.683 15:39:03 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:17.683 15:39:03 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:17.957 15:39:03 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:27:18.216 [ 00:27:18.216 { 00:27:18.216 "name": "ftl0", 00:27:18.216 "aliases": [ 00:27:18.216 "3adbce50-96c5-4eda-b128-33d3af6d2f46" 00:27:18.216 ], 00:27:18.216 "product_name": "FTL disk", 00:27:18.216 "block_size": 4096, 00:27:18.216 "num_blocks": 23592960, 00:27:18.216 "uuid": "3adbce50-96c5-4eda-b128-33d3af6d2f46", 00:27:18.216 "assigned_rate_limits": { 00:27:18.216 "rw_ios_per_sec": 0, 00:27:18.216 "rw_mbytes_per_sec": 0, 00:27:18.216 "r_mbytes_per_sec": 0, 00:27:18.216 "w_mbytes_per_sec": 0 00:27:18.216 }, 00:27:18.216 "claimed": false, 00:27:18.216 "zoned": false, 00:27:18.216 "supported_io_types": { 00:27:18.216 "read": true, 00:27:18.216 "write": true, 00:27:18.216 "unmap": true, 00:27:18.216 "flush": true, 00:27:18.216 "reset": false, 00:27:18.216 "nvme_admin": false, 00:27:18.216 "nvme_io": false, 00:27:18.216 "nvme_io_md": false, 00:27:18.216 "write_zeroes": true, 00:27:18.216 "zcopy": false, 00:27:18.216 "get_zone_info": false, 00:27:18.216 "zone_management": false, 00:27:18.216 "zone_append": false, 00:27:18.216 "compare": false, 00:27:18.216 "compare_and_write": false, 00:27:18.216 "abort": false, 00:27:18.216 "seek_hole": false, 00:27:18.216 "seek_data": false, 00:27:18.216 "copy": false, 00:27:18.216 "nvme_iov_md": false 00:27:18.216 }, 00:27:18.216 "driver_specific": { 00:27:18.216 "ftl": { 00:27:18.216 "base_bdev": "021127f4-17d1-4ad0-83c1-e6c1484b25b9", 00:27:18.216 "cache": "nvc0n1p0" 00:27:18.216 } 00:27:18.216 } 00:27:18.216 } 00:27:18.216 ] 00:27:18.217 15:39:03 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:27:18.217 15:39:03 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:27:18.217 15:39:03 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:18.217 15:39:04 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:27:18.217 15:39:04 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:27:18.476 15:39:04 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:27:18.476 { 00:27:18.476 "name": "ftl0", 00:27:18.476 "aliases": [ 00:27:18.476 "3adbce50-96c5-4eda-b128-33d3af6d2f46" 00:27:18.476 ], 00:27:18.476 "product_name": "FTL disk", 00:27:18.476 "block_size": 4096, 00:27:18.476 "num_blocks": 23592960, 00:27:18.476 "uuid": "3adbce50-96c5-4eda-b128-33d3af6d2f46", 00:27:18.476 "assigned_rate_limits": { 00:27:18.476 "rw_ios_per_sec": 0, 00:27:18.476 "rw_mbytes_per_sec": 0, 00:27:18.476 "r_mbytes_per_sec": 0, 00:27:18.476 "w_mbytes_per_sec": 0 00:27:18.476 }, 00:27:18.476 "claimed": false, 00:27:18.476 "zoned": false, 00:27:18.476 "supported_io_types": { 00:27:18.476 "read": true, 00:27:18.476 "write": true, 00:27:18.476 "unmap": true, 00:27:18.476 "flush": true, 00:27:18.476 "reset": false, 00:27:18.476 "nvme_admin": false, 00:27:18.476 "nvme_io": false, 00:27:18.476 "nvme_io_md": false, 00:27:18.476 "write_zeroes": true, 00:27:18.476 "zcopy": false, 00:27:18.476 "get_zone_info": false, 00:27:18.476 "zone_management": false, 00:27:18.476 "zone_append": false, 00:27:18.476 "compare": false, 00:27:18.476 "compare_and_write": false, 00:27:18.476 "abort": false, 00:27:18.476 "seek_hole": false, 00:27:18.476 "seek_data": false, 00:27:18.476 "copy": false, 00:27:18.476 "nvme_iov_md": false 00:27:18.476 }, 00:27:18.476 "driver_specific": { 00:27:18.476 "ftl": { 00:27:18.476 "base_bdev": "021127f4-17d1-4ad0-83c1-e6c1484b25b9", 
00:27:18.476 "cache": "nvc0n1p0" 00:27:18.476 } 00:27:18.476 } 00:27:18.476 } 00:27:18.476 ]' 00:27:18.476 15:39:04 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:27:18.476 15:39:04 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:27:18.476 15:39:04 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:18.735 [2024-11-20 15:39:04.688925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.735 [2024-11-20 15:39:04.688978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:18.735 [2024-11-20 15:39:04.688997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:18.735 [2024-11-20 15:39:04.689013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.735 [2024-11-20 15:39:04.689051] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:18.996 [2024-11-20 15:39:04.693362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.996 [2024-11-20 15:39:04.693407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:18.996 [2024-11-20 15:39:04.693430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.288 ms 00:27:18.996 [2024-11-20 15:39:04.693441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.996 [2024-11-20 15:39:04.694003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.996 [2024-11-20 15:39:04.694021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:18.996 [2024-11-20 15:39:04.694035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:27:18.996 [2024-11-20 15:39:04.694046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.996 [2024-11-20 15:39:04.696956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.996 [2024-11-20 15:39:04.696982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:18.996 [2024-11-20 15:39:04.696996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:27:18.996 [2024-11-20 15:39:04.697006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.996 [2024-11-20 15:39:04.702743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.996 [2024-11-20 15:39:04.702777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:18.996 [2024-11-20 15:39:04.702792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.682 ms 00:27:18.996 [2024-11-20 15:39:04.702802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.996 [2024-11-20 15:39:04.742527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.996 [2024-11-20 15:39:04.742593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:18.996 [2024-11-20 15:39:04.742616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.632 ms 00:27:18.996 [2024-11-20 15:39:04.742627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.996 [2024-11-20 15:39:04.765165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.996 [2024-11-20 15:39:04.765207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:18.996 [2024-11-20 15:39:04.765225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.433 ms 00:27:18.996 [2024-11-20 15:39:04.765239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.996 [2024-11-20 15:39:04.765462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.996 [2024-11-20 15:39:04.765477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:18.996 [2024-11-20 15:39:04.765491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:27:18.996 [2024-11-20 15:39:04.765501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.996 [2024-11-20 15:39:04.803044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.996 [2024-11-20 15:39:04.803100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:18.996 [2024-11-20 15:39:04.803119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.508 ms 00:27:18.996 [2024-11-20 15:39:04.803129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.996 [2024-11-20 15:39:04.840867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.996 [2024-11-20 15:39:04.840906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:18.996 [2024-11-20 15:39:04.841053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.638 ms 00:27:18.996 [2024-11-20 15:39:04.841064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.996 [2024-11-20 15:39:04.878301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.996 [2024-11-20 15:39:04.878459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:18.996 [2024-11-20 15:39:04.878486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.139 ms 00:27:18.996 [2024-11-20 15:39:04.878497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.996 [2024-11-20 15:39:04.915641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.996 [2024-11-20 15:39:04.915818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:18.996 [2024-11-20 15:39:04.915845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.967 ms 00:27:18.996 [2024-11-20 15:39:04.915856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.996 [2024-11-20 15:39:04.915946] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:18.996 [2024-11-20 15:39:04.915964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.915979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.915991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916056] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:18.996 [2024-11-20 15:39:04.916355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 
[2024-11-20 15:39:04.916379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:27:18.997 [2024-11-20 15:39:04.916707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.916991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:18.997 [2024-11-20 15:39:04.917237] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:18.997 [2024-11-20 15:39:04.917252] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3adbce50-96c5-4eda-b128-33d3af6d2f46 00:27:18.997 [2024-11-20 15:39:04.917263] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:18.997 [2024-11-20 15:39:04.917276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:18.997 [2024-11-20 15:39:04.917286] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:18.997 [2024-11-20 15:39:04.917302] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:18.997 [2024-11-20 15:39:04.917312] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:18.997 [2024-11-20 15:39:04.917326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:27:18.997 [2024-11-20 15:39:04.917336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:18.997 [2024-11-20 15:39:04.917348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:18.997 [2024-11-20 15:39:04.917357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:18.997 [2024-11-20 15:39:04.917369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.997 [2024-11-20 15:39:04.917380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:18.997 [2024-11-20 15:39:04.917395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:27:18.997 [2024-11-20 15:39:04.917405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.997 [2024-11-20 15:39:04.938195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.997 [2024-11-20 15:39:04.938233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:18.997 [2024-11-20 15:39:04.938251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.752 ms 00:27:18.997 [2024-11-20 15:39:04.938261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.997 [2024-11-20 15:39:04.938935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.997 [2024-11-20 15:39:04.938955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:18.997 [2024-11-20 15:39:04.938969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:27:18.997 [2024-11-20 15:39:04.938979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.257 [2024-11-20 15:39:05.010307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.257 [2024-11-20 15:39:05.010358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:19.257 [2024-11-20 15:39:05.010375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.257 [2024-11-20 15:39:05.010386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.257 [2024-11-20 15:39:05.010529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.257 [2024-11-20 15:39:05.010550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:19.257 [2024-11-20 15:39:05.010564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.257 [2024-11-20 15:39:05.010595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.257 [2024-11-20 15:39:05.010678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.257 [2024-11-20 15:39:05.010691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:19.257 [2024-11-20 15:39:05.010711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.257 [2024-11-20 15:39:05.010722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.257 [2024-11-20 15:39:05.010755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.257 [2024-11-20 15:39:05.010768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:19.257 [2024-11-20 15:39:05.010781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.257 [2024-11-20 15:39:05.010791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.257 [2024-11-20 15:39:05.145461] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.257 [2024-11-20 15:39:05.145521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:19.257 [2024-11-20 15:39:05.145554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.257 [2024-11-20 15:39:05.145564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.517 [2024-11-20 15:39:05.248477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.517 [2024-11-20 15:39:05.248537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:19.517 [2024-11-20 15:39:05.248555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.517 [2024-11-20 15:39:05.248586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.517 [2024-11-20 15:39:05.248749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.517 [2024-11-20 15:39:05.248762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:19.517 [2024-11-20 15:39:05.248794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.517 [2024-11-20 15:39:05.248808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.517 [2024-11-20 15:39:05.248869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.517 [2024-11-20 15:39:05.248880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:19.517 [2024-11-20 15:39:05.248892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.517 [2024-11-20 15:39:05.248902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.517 [2024-11-20 15:39:05.249035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.517 [2024-11-20 15:39:05.249049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:19.517 [2024-11-20 15:39:05.249062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.517 [2024-11-20 15:39:05.249074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.517 [2024-11-20 15:39:05.249133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.517 [2024-11-20 15:39:05.249145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:19.517 [2024-11-20 15:39:05.249158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.517 [2024-11-20 15:39:05.249168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.517 [2024-11-20 15:39:05.249230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.517 [2024-11-20 15:39:05.249241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:19.517 [2024-11-20 15:39:05.249256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.517 [2024-11-20 15:39:05.249267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.517 [2024-11-20 15:39:05.249335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.517 [2024-11-20 15:39:05.249347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:19.517 [2024-11-20 15:39:05.249360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.517 [2024-11-20 15:39:05.249370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:19.517 [2024-11-20 15:39:05.249563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 560.620 ms, result 0 00:27:19.517 true 00:27:19.517 15:39:05 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78386 00:27:19.517 15:39:05 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78386 ']' 00:27:19.517 15:39:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78386 00:27:19.517 15:39:05 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:19.517 15:39:05 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:19.517 15:39:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78386 00:27:19.517 15:39:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:19.517 15:39:05 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:19.517 killing process with pid 78386 00:27:19.517 15:39:05 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78386' 00:27:19.517 15:39:05 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78386 00:27:19.517 15:39:05 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78386 00:27:26.087 15:39:11 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:27:27.025 65536+0 records in 00:27:27.025 65536+0 records out 00:27:27.025 268435456 bytes (268 MB, 256 MiB) copied, 1.02146 s, 263 MB/s 00:27:27.025 15:39:12 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:27.025 [2024-11-20 15:39:12.968807] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
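The xtrace above interleaves three distinct steps of the trim test: trim.sh@63 tears down the previous SPDK app through the killprocess helper from autotest_common.sh, trim.sh@66 generates a 256 MiB random pattern with dd, and trim.sh@69 streams that pattern into the FTL bdev with spdk_dd. A minimal bash sketch of the sequence as reconstructed from the trace — the killprocess body is inferred from the echoed checks, and the dd output path is an assumption (the trace does not show where dd writes):

#!/usr/bin/env bash
# Sketch only: reconstructed from the xtrace above, not copied from the scripts.
set -euo pipefail

spdk_dir=/home/vagrant/spdk_repo/spdk          # path as it appears in the spdk_dd invocation
pattern=$spdk_dir/test/ftl/random_pattern      # file spdk_dd reads below (write path assumed)

killprocess() {                                # behavior inferred from the autotest_common.sh trace
    local pid=$1
    [[ -n $pid ]] || return 1                  # the '[' -z 78386 ']' guard
    kill -0 "$pid"                             # target must still be alive
    if [[ $(uname) == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")   # resolves to reactor_0 in the trace
        [[ $name == sudo ]] && return 1        # assumed: never signal a sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}

killprocess 78386                              # stop the app from the previous test step

dd if=/dev/urandom of="$pattern" bs=4K count=65536          # 65536 x 4 KiB = 256 MiB
"$spdk_dir"/build/bin/spdk_dd --if="$pattern" --ob=ftl0 \
    --json="$spdk_dir"/test/ftl/config/ftl.json             # ftl0 defined in ftl.json

spdk_dd brings up its own SPDK application around the copy, which is why a full EAL and FTL startup sequence follows immediately below before any data moves.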
00:27:27.025 [2024-11-20 15:39:12.968943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78580 ] 00:27:27.285 [2024-11-20 15:39:13.152685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.544 [2024-11-20 15:39:13.313321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.803 [2024-11-20 15:39:13.661441] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:27.803 [2024-11-20 15:39:13.661503] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:28.066 [2024-11-20 15:39:13.824044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.066 [2024-11-20 15:39:13.824094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:28.066 [2024-11-20 15:39:13.824109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:28.066 [2024-11-20 15:39:13.824136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.827252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.066 [2024-11-20 15:39:13.827293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:28.066 [2024-11-20 15:39:13.827306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.096 ms 00:27:28.066 [2024-11-20 15:39:13.827332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.827429] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:28.066 [2024-11-20 15:39:13.828370] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:28.066 [2024-11-20 15:39:13.828405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.066 [2024-11-20 15:39:13.828416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:28.066 [2024-11-20 15:39:13.828427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:27:28.066 [2024-11-20 15:39:13.828437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.829986] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:28.066 [2024-11-20 15:39:13.848866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.066 [2024-11-20 15:39:13.848909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:28.066 [2024-11-20 15:39:13.848922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.881 ms 00:27:28.066 [2024-11-20 15:39:13.848949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.849049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.066 [2024-11-20 15:39:13.849064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:28.066 [2024-11-20 15:39:13.849075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:28.066 [2024-11-20 15:39:13.849084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.855758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:28.066 [2024-11-20 15:39:13.855924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:28.066 [2024-11-20 15:39:13.855945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.632 ms 00:27:28.066 [2024-11-20 15:39:13.855955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.856062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.066 [2024-11-20 15:39:13.856076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:28.066 [2024-11-20 15:39:13.856087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:28.066 [2024-11-20 15:39:13.856097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.856126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.066 [2024-11-20 15:39:13.856141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:28.066 [2024-11-20 15:39:13.856151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:28.066 [2024-11-20 15:39:13.856161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.856185] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:28.066 [2024-11-20 15:39:13.860971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.066 [2024-11-20 15:39:13.861006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:28.066 [2024-11-20 15:39:13.861018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:27:28.066 [2024-11-20 15:39:13.861044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.861110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.066 [2024-11-20 15:39:13.861122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:28.066 [2024-11-20 15:39:13.861133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:28.066 [2024-11-20 15:39:13.861144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.861163] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:28.066 [2024-11-20 15:39:13.861189] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:28.066 [2024-11-20 15:39:13.861224] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:28.066 [2024-11-20 15:39:13.861241] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:28.066 [2024-11-20 15:39:13.861338] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:28.066 [2024-11-20 15:39:13.861351] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:28.066 [2024-11-20 15:39:13.861364] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:28.066 [2024-11-20 15:39:13.861376] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:28.066 [2024-11-20 15:39:13.861392] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:28.066 [2024-11-20 15:39:13.861403] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:28.066 [2024-11-20 15:39:13.861413] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:28.066 [2024-11-20 15:39:13.861422] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:28.066 [2024-11-20 15:39:13.861432] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:28.066 [2024-11-20 15:39:13.861443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.066 [2024-11-20 15:39:13.861453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:28.066 [2024-11-20 15:39:13.861463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:27:28.066 [2024-11-20 15:39:13.861473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.861548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.066 [2024-11-20 15:39:13.861562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:28.066 [2024-11-20 15:39:13.861592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:28.066 [2024-11-20 15:39:13.861602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.066 [2024-11-20 15:39:13.861692] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:28.066 [2024-11-20 15:39:13.861721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:28.066 [2024-11-20 15:39:13.861732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.066 [2024-11-20 15:39:13.861743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.066 [2024-11-20 15:39:13.861754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:28.066 [2024-11-20 15:39:13.861776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:28.066 [2024-11-20 15:39:13.861785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:28.066 [2024-11-20 15:39:13.861796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:28.067 [2024-11-20 15:39:13.861805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:28.067 [2024-11-20 15:39:13.861822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.067 [2024-11-20 15:39:13.861833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:28.067 [2024-11-20 15:39:13.861842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:28.067 [2024-11-20 15:39:13.861852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.067 [2024-11-20 15:39:13.861871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:28.067 [2024-11-20 15:39:13.861881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:28.067 [2024-11-20 15:39:13.861890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.067 [2024-11-20 15:39:13.861900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:28.067 [2024-11-20 15:39:13.861909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:28.067 [2024-11-20 15:39:13.861918] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.067 [2024-11-20 15:39:13.861927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:28.067 [2024-11-20 15:39:13.861937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:28.067 [2024-11-20 15:39:13.861946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.067 [2024-11-20 15:39:13.861955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:28.067 [2024-11-20 15:39:13.861964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:28.067 [2024-11-20 15:39:13.861973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.067 [2024-11-20 15:39:13.861982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:28.067 [2024-11-20 15:39:13.861992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:28.067 [2024-11-20 15:39:13.862001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.067 [2024-11-20 15:39:13.862009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:28.067 [2024-11-20 15:39:13.862019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:28.067 [2024-11-20 15:39:13.862028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.067 [2024-11-20 15:39:13.862037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:28.067 [2024-11-20 15:39:13.862062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:28.067 [2024-11-20 15:39:13.862071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.067 [2024-11-20 15:39:13.862080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:28.067 [2024-11-20 15:39:13.862089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:28.067 [2024-11-20 15:39:13.862098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.067 [2024-11-20 15:39:13.862107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:28.067 [2024-11-20 15:39:13.862117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:28.067 [2024-11-20 15:39:13.862126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.067 [2024-11-20 15:39:13.862135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:28.067 [2024-11-20 15:39:13.862144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:28.067 [2024-11-20 15:39:13.862154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.067 [2024-11-20 15:39:13.862163] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:28.067 [2024-11-20 15:39:13.862173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:28.067 [2024-11-20 15:39:13.862183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.067 [2024-11-20 15:39:13.862197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.067 [2024-11-20 15:39:13.862207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:28.067 [2024-11-20 15:39:13.862217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:28.067 [2024-11-20 15:39:13.862227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:28.067 
[2024-11-20 15:39:13.862236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:28.067 [2024-11-20 15:39:13.862246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:28.067 [2024-11-20 15:39:13.862256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:28.067 [2024-11-20 15:39:13.862268] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:28.067 [2024-11-20 15:39:13.862280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.067 [2024-11-20 15:39:13.862292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:28.067 [2024-11-20 15:39:13.862302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:28.067 [2024-11-20 15:39:13.862313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:28.067 [2024-11-20 15:39:13.862323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:28.067 [2024-11-20 15:39:13.862334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:28.067 [2024-11-20 15:39:13.862344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:28.067 [2024-11-20 15:39:13.862355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:28.067 [2024-11-20 15:39:13.862366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:28.067 [2024-11-20 15:39:13.862377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:28.067 [2024-11-20 15:39:13.862387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:28.067 [2024-11-20 15:39:13.862397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:28.067 [2024-11-20 15:39:13.862407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:28.067 [2024-11-20 15:39:13.862418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:28.067 [2024-11-20 15:39:13.862428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:28.067 [2024-11-20 15:39:13.862439] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:28.067 [2024-11-20 15:39:13.862450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.067 [2024-11-20 15:39:13.862461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:28.067 [2024-11-20 15:39:13.862472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:28.067 [2024-11-20 15:39:13.862482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:28.067 [2024-11-20 15:39:13.862493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:28.067 [2024-11-20 15:39:13.862504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.067 [2024-11-20 15:39:13.862515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:28.067 [2024-11-20 15:39:13.862529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:27:28.067 [2024-11-20 15:39:13.862539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.067 [2024-11-20 15:39:13.901483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.067 [2024-11-20 15:39:13.901522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:28.067 [2024-11-20 15:39:13.901537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.869 ms 00:27:28.067 [2024-11-20 15:39:13.901547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.067 [2024-11-20 15:39:13.901743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.067 [2024-11-20 15:39:13.901762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:28.067 [2024-11-20 15:39:13.901774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:27:28.067 [2024-11-20 15:39:13.901784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.067 [2024-11-20 15:39:13.959016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.067 [2024-11-20 15:39:13.959055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:28.067 [2024-11-20 15:39:13.959070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.207 ms 00:27:28.067 [2024-11-20 15:39:13.959100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.067 [2024-11-20 15:39:13.959199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.067 [2024-11-20 15:39:13.959214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:28.067 [2024-11-20 15:39:13.959226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:28.067 [2024-11-20 15:39:13.959236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.067 [2024-11-20 15:39:13.959707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.067 [2024-11-20 15:39:13.959722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:28.067 [2024-11-20 15:39:13.959733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:27:28.067 [2024-11-20 15:39:13.959749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.067 [2024-11-20 15:39:13.959883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.067 [2024-11-20 15:39:13.959897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:28.067 [2024-11-20 15:39:13.959908] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:27:28.067 [2024-11-20 15:39:13.959918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.068 [2024-11-20 15:39:13.979229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.068 [2024-11-20 15:39:13.979266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:28.068 [2024-11-20 15:39:13.979279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.287 ms 00:27:28.068 [2024-11-20 15:39:13.979290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.068 [2024-11-20 15:39:13.997693] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:28.068 [2024-11-20 15:39:13.997732] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:28.068 [2024-11-20 15:39:13.997748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.068 [2024-11-20 15:39:13.997758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:28.068 [2024-11-20 15:39:13.997769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.348 ms 00:27:28.068 [2024-11-20 15:39:13.997779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.327 [2024-11-20 15:39:14.026511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.327 [2024-11-20 15:39:14.026556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:28.327 [2024-11-20 15:39:14.026598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.654 ms 00:27:28.327 [2024-11-20 15:39:14.026625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.327 [2024-11-20 15:39:14.044230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.327 [2024-11-20 15:39:14.044389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:28.327 [2024-11-20 15:39:14.044409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.523 ms 00:27:28.327 [2024-11-20 15:39:14.044420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.328 [2024-11-20 15:39:14.061766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.328 [2024-11-20 15:39:14.061911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:28.328 [2024-11-20 15:39:14.061930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.268 ms 00:27:28.328 [2024-11-20 15:39:14.061940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.328 [2024-11-20 15:39:14.062747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.328 [2024-11-20 15:39:14.062774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:28.328 [2024-11-20 15:39:14.062787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:27:28.328 [2024-11-20 15:39:14.062797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.328 [2024-11-20 15:39:14.150135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.328 [2024-11-20 15:39:14.150206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:28.328 [2024-11-20 15:39:14.150224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.306 ms 00:27:28.328 [2024-11-20 15:39:14.150234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.328 [2024-11-20 15:39:14.160892] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:28.328 [2024-11-20 15:39:14.177195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.328 [2024-11-20 15:39:14.177250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:28.328 [2024-11-20 15:39:14.177265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.843 ms 00:27:28.328 [2024-11-20 15:39:14.177291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.328 [2024-11-20 15:39:14.177429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.328 [2024-11-20 15:39:14.177447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:28.328 [2024-11-20 15:39:14.177458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:28.328 [2024-11-20 15:39:14.177468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.328 [2024-11-20 15:39:14.177524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.328 [2024-11-20 15:39:14.177535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:28.328 [2024-11-20 15:39:14.177546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:28.328 [2024-11-20 15:39:14.177556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.328 [2024-11-20 15:39:14.177616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.328 [2024-11-20 15:39:14.177631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:28.328 [2024-11-20 15:39:14.177660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:28.328 [2024-11-20 15:39:14.177671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.328 [2024-11-20 15:39:14.177710] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:28.328 [2024-11-20 15:39:14.177722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.328 [2024-11-20 15:39:14.177732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:28.328 [2024-11-20 15:39:14.177743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:28.328 [2024-11-20 15:39:14.177754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.328 [2024-11-20 15:39:14.214385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.328 [2024-11-20 15:39:14.214428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:28.328 [2024-11-20 15:39:14.214442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.608 ms 00:27:28.328 [2024-11-20 15:39:14.214468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.328 [2024-11-20 15:39:14.214610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.328 [2024-11-20 15:39:14.214643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:28.328 [2024-11-20 15:39:14.214655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:28.328 [2024-11-20 15:39:14.214665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
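One way to sanity-check the layout dump earlier in this startup trace: the l2p region size follows directly from the entry count and address size the FTL reports, and the same entry count bounds the user-visible capacity. A quick check in shell, with the 4 KiB FTL block size as our assumption (the other numbers are copied from the dump above):

entries=23592960       # "L2P entries: 23592960" from the layout dump
addr=4                 # "L2P address size: 4" (bytes per entry)
blk=4096               # assumed 4 KiB FTL block size

echo "l2p region:  $(( entries * addr / 1024 / 1024 )) MiB"    # prints 90 -- matches "Region l2p ... blocks: 90.00 MiB"
echo "addressable: $(( entries * blk / 1024 / 1024 )) MiB"     # prints 92160 -- vs the 102400.00 MiB data_btm region

The roughly 10 GiB gap between what the L2P can map and the data_btm region is presumably capacity the FTL holds back for relocation and over-provisioning rather than exposing to the user.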
00:27:28.328 [2024-11-20 15:39:14.215645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:28.328 [2024-11-20 15:39:14.220156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 391.260 ms, result 0 00:27:28.328 [2024-11-20 15:39:14.220999] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:28.328 [2024-11-20 15:39:14.239226] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:29.708  [2024-11-20T15:39:16.604Z] Copying: 28/256 [MB] (28 MBps) [2024-11-20T15:39:17.541Z] Copying: 57/256 [MB] (28 MBps) [2024-11-20T15:39:18.478Z] Copying: 86/256 [MB] (28 MBps) [2024-11-20T15:39:19.419Z] Copying: 114/256 [MB] (28 MBps) [2024-11-20T15:39:20.361Z] Copying: 143/256 [MB] (29 MBps) [2024-11-20T15:39:21.297Z] Copying: 173/256 [MB] (29 MBps) [2024-11-20T15:39:22.674Z] Copying: 202/256 [MB] (29 MBps) [2024-11-20T15:39:23.239Z] Copying: 232/256 [MB] (29 MBps) [2024-11-20T15:39:23.239Z] Copying: 256/256 [MB] (average 29 MBps)[2024-11-20 15:39:23.055205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:37.281 [2024-11-20 15:39:23.070095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.281 [2024-11-20 15:39:23.070142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:37.281 [2024-11-20 15:39:23.070157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:37.281 [2024-11-20 15:39:23.070168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.281 [2024-11-20 15:39:23.070206] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:37.281 [2024-11-20 15:39:23.074452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.281 [2024-11-20 15:39:23.074486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:37.281 [2024-11-20 15:39:23.074498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.228 ms 00:27:37.281 [2024-11-20 15:39:23.074508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.282 [2024-11-20 15:39:23.076350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.282 [2024-11-20 15:39:23.076394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:37.282 [2024-11-20 15:39:23.076411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:27:37.282 [2024-11-20 15:39:23.076424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.282 [2024-11-20 15:39:23.082963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.282 [2024-11-20 15:39:23.083001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:37.282 [2024-11-20 15:39:23.083021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.516 ms 00:27:37.282 [2024-11-20 15:39:23.083032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.282 [2024-11-20 15:39:23.088854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.282 [2024-11-20 15:39:23.088890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:37.282 [2024-11-20 15:39:23.088902] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.767 ms 00:27:37.282 [2024-11-20 15:39:23.088929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.282 [2024-11-20 15:39:23.124641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.282 [2024-11-20 15:39:23.124678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:37.282 [2024-11-20 15:39:23.124691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.662 ms 00:27:37.282 [2024-11-20 15:39:23.124717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.282 [2024-11-20 15:39:23.145478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.282 [2024-11-20 15:39:23.145693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:37.282 [2024-11-20 15:39:23.145724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.706 ms 00:27:37.282 [2024-11-20 15:39:23.145747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.282 [2024-11-20 15:39:23.145951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.282 [2024-11-20 15:39:23.145970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:37.282 [2024-11-20 15:39:23.145982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:27:37.282 [2024-11-20 15:39:23.145992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.282 [2024-11-20 15:39:23.183014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.282 [2024-11-20 15:39:23.183052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:37.282 [2024-11-20 15:39:23.183065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.003 ms 00:27:37.282 [2024-11-20 15:39:23.183075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.282 [2024-11-20 15:39:23.219754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.282 [2024-11-20 15:39:23.219790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:37.282 [2024-11-20 15:39:23.219803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.604 ms 00:27:37.282 [2024-11-20 15:39:23.219813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.540 [2024-11-20 15:39:23.255717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.540 [2024-11-20 15:39:23.255752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:37.540 [2024-11-20 15:39:23.255764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.848 ms 00:27:37.540 [2024-11-20 15:39:23.255790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.540 [2024-11-20 15:39:23.291770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.540 [2024-11-20 15:39:23.291806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:37.540 [2024-11-20 15:39:23.291818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.896 ms 00:27:37.540 [2024-11-20 15:39:23.291827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.540 [2024-11-20 15:39:23.291901] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:37.540 [2024-11-20 15:39:23.291922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:27:37.540 [2024-11-20 15:39:23.291935 - 15:39:23.293012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free
00:27:37.541 [2024-11-20 15:39:23.293030] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:37.541 [2024-11-20 15:39:23.293040] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3adbce50-96c5-4eda-b128-33d3af6d2f46
00:27:37.541 [2024-11-20 15:39:23.293051] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:27:37.541 [2024-11-20 15:39:23.293061] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:27:37.541 [2024-11-20 15:39:23.293071] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:27:37.541 [2024-11-20 15:39:23.293081] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:27:37.541 [2024-11-20 15:39:23.293090] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:37.541 [2024-11-20 15:39:23.293100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:37.541 [2024-11-20 15:39:23.293110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:37.541 [2024-11-20 15:39:23.293119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:37.541 [2024-11-20 15:39:23.293128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:37.541 [2024-11-20 15:39:23.293139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:37.541 [2024-11-20 15:39:23.293149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:37.541 [2024-11-20 15:39:23.293164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms
00:27:37.541 [2024-11-20 15:39:23.293174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:37.541 [2024-11-20 15:39:23.311484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:37.541 [2024-11-20 15:39:23.311697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:37.541 [2024-11-20 15:39:23.311719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.288 ms
00:27:37.541 [2024-11-20 15:39:23.311736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:37.541 [2024-11-20 15:39:23.312307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:37.541 [2024-11-20 15:39:23.312330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:37.541 [2024-11-20 15:39:23.312342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms
00:27:37.541 [2024-11-20 15:39:23.312352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:37.541 [2024-11-20 15:39:23.367532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:37.541 [2024-11-20 15:39:23.367580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:37.541 [2024-11-20 15:39:23.367594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:37.541 [2024-11-20 15:39:23.367605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:37.541 [2024-11-20 15:39:23.367700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:37.541 [2024-11-20 15:39:23.367722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:37.541 [2024-11-20 15:39:23.367733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:37.541 [2024-11-20 15:39:23.367743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:37.541 [2024-11-20 15:39:23.367792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.541 [2024-11-20 15:39:23.367804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:37.541 [2024-11-20 15:39:23.367815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.541 [2024-11-20 15:39:23.367826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.541 [2024-11-20 15:39:23.367844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.541 [2024-11-20 15:39:23.367855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:37.541 [2024-11-20 15:39:23.367869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.541 [2024-11-20 15:39:23.367879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.541 [2024-11-20 15:39:23.491136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.541 [2024-11-20 15:39:23.491434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:37.541 [2024-11-20 15:39:23.491463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.541 [2024-11-20 15:39:23.491475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.799 [2024-11-20 15:39:23.592797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.799 [2024-11-20 15:39:23.592860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:37.799 [2024-11-20 15:39:23.592874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.799 [2024-11-20 15:39:23.592901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.799 [2024-11-20 15:39:23.592998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.799 [2024-11-20 15:39:23.593010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:37.799 [2024-11-20 15:39:23.593021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.799 [2024-11-20 15:39:23.593031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.799 [2024-11-20 15:39:23.593060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.799 [2024-11-20 15:39:23.593070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:37.799 [2024-11-20 15:39:23.593080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.799 [2024-11-20 15:39:23.593093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.799 [2024-11-20 15:39:23.593225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.799 [2024-11-20 15:39:23.593239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:37.799 [2024-11-20 15:39:23.593250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.799 [2024-11-20 15:39:23.593260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.799 [2024-11-20 15:39:23.593295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.799 [2024-11-20 15:39:23.593308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:37.799 [2024-11-20 15:39:23.593318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.799 [2024-11-20 
15:39:23.593328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.799 [2024-11-20 15:39:23.593371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.799 [2024-11-20 15:39:23.593382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:37.799 [2024-11-20 15:39:23.593392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.799 [2024-11-20 15:39:23.593402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.799 [2024-11-20 15:39:23.593445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.799 [2024-11-20 15:39:23.593457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:37.800 [2024-11-20 15:39:23.593467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.800 [2024-11-20 15:39:23.593480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.800 [2024-11-20 15:39:23.593657] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 523.515 ms, result 0 00:27:39.175 00:27:39.175 00:27:39.175 15:39:24 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78699 00:27:39.175 15:39:24 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:27:39.175 15:39:24 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78699 00:27:39.175 15:39:24 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78699 ']' 00:27:39.175 15:39:24 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.175 15:39:24 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.175 15:39:24 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.175 15:39:24 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.175 15:39:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:39.175 [2024-11-20 15:39:24.967834] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
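The trace above shows the ftl_trim fixture launching a fresh spdk_tgt with the ftl_init log flag, recording its pid in svcpid, and blocking until the target answers on the default UNIX RPC socket /var/tmp/spdk.sock before the test body runs. What follows is a minimal standalone sketch of that launch/wait/teardown pattern, not the suite's actual helper: it assumes a built SPDK tree at $SPDK_DIR and polls the stock rpc_get_methods RPC in place of the waitforlisten function used in the trace.

#!/usr/bin/env bash
# Sketch only: reproduces the launch/wait pattern visible in the trace above.
# Assumes SPDK_DIR points at a built SPDK checkout; spdk_tgt, rpc.py and
# rpc_get_methods are stock SPDK, everything else here is illustrative.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# Start the target with the ftl_init debug log flag, as in the trace.
"$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!

# Stand-in for waitforlisten: retry a cheap RPC until the socket answers.
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods \
            >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done

# ... test body would go here (e.g. rpc.py load_config, bdev_ftl_unmap) ...

# Tear down; SIGTERM gives the target a chance to shut FTL down cleanly.
kill "$svcpid"
wait "$svcpid"

Killing the target this way is what produces the graceful 'FTL shutdown' management trace seen later in this log after the unmap calls complete.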
00:27:39.175 [2024-11-20 15:39:24.968002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78699 ] 00:27:39.433 [2024-11-20 15:39:25.160299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.433 [2024-11-20 15:39:25.277872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.369 15:39:26 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.369 15:39:26 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:40.369 15:39:26 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:27:40.628 [2024-11-20 15:39:26.407171] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:40.628 [2024-11-20 15:39:26.407234] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:40.887 [2024-11-20 15:39:26.590926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.887 [2024-11-20 15:39:26.591165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:40.887 [2024-11-20 15:39:26.591197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:40.887 [2024-11-20 15:39:26.591208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.887 [2024-11-20 15:39:26.595123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.887 [2024-11-20 15:39:26.595162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:40.887 [2024-11-20 15:39:26.595177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.886 ms 00:27:40.887 [2024-11-20 15:39:26.595188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.887 [2024-11-20 15:39:26.595297] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:40.887 [2024-11-20 15:39:26.596311] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:40.887 [2024-11-20 15:39:26.596347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.887 [2024-11-20 15:39:26.596359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:40.887 [2024-11-20 15:39:26.596372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:27:40.887 [2024-11-20 15:39:26.596382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.887 [2024-11-20 15:39:26.597894] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:40.887 [2024-11-20 15:39:26.617803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.887 [2024-11-20 15:39:26.617849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:40.887 [2024-11-20 15:39:26.617865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.914 ms 00:27:40.887 [2024-11-20 15:39:26.617881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.887 [2024-11-20 15:39:26.617981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.887 [2024-11-20 15:39:26.618001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:40.887 [2024-11-20 15:39:26.618012] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:40.887 [2024-11-20 15:39:26.618041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.887 [2024-11-20 15:39:26.624750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.887 [2024-11-20 15:39:26.624933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:40.887 [2024-11-20 15:39:26.624954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.650 ms 00:27:40.887 [2024-11-20 15:39:26.624970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.887 [2024-11-20 15:39:26.625116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.887 [2024-11-20 15:39:26.625136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:40.887 [2024-11-20 15:39:26.625148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:27:40.887 [2024-11-20 15:39:26.625163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.887 [2024-11-20 15:39:26.625200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.887 [2024-11-20 15:39:26.625217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:40.887 [2024-11-20 15:39:26.625228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:40.887 [2024-11-20 15:39:26.625243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.887 [2024-11-20 15:39:26.625273] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:40.887 [2024-11-20 15:39:26.630128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.887 [2024-11-20 15:39:26.630175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:40.887 [2024-11-20 15:39:26.630194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.857 ms 00:27:40.887 [2024-11-20 15:39:26.630205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.887 [2024-11-20 15:39:26.630282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.887 [2024-11-20 15:39:26.630296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:40.887 [2024-11-20 15:39:26.630312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:40.887 [2024-11-20 15:39:26.630327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.887 [2024-11-20 15:39:26.630355] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:40.887 [2024-11-20 15:39:26.630379] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:40.887 [2024-11-20 15:39:26.630428] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:40.887 [2024-11-20 15:39:26.630448] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:40.887 [2024-11-20 15:39:26.630542] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:40.887 [2024-11-20 15:39:26.630564] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:40.887 [2024-11-20 15:39:26.630626] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:40.887 [2024-11-20 15:39:26.630640] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:40.887 [2024-11-20 15:39:26.630658] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:40.887 [2024-11-20 15:39:26.630669] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:40.887 [2024-11-20 15:39:26.630685] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:40.887 [2024-11-20 15:39:26.630695] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:40.888 [2024-11-20 15:39:26.630715] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:40.888 [2024-11-20 15:39:26.630726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.888 [2024-11-20 15:39:26.630742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:40.888 [2024-11-20 15:39:26.630753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:27:40.888 [2024-11-20 15:39:26.630768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.888 [2024-11-20 15:39:26.630851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.888 [2024-11-20 15:39:26.630868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:40.888 [2024-11-20 15:39:26.630878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:40.888 [2024-11-20 15:39:26.630893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.888 [2024-11-20 15:39:26.630985] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:40.888 [2024-11-20 15:39:26.631003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:40.888 [2024-11-20 15:39:26.631014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.888 [2024-11-20 15:39:26.631029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:40.888 [2024-11-20 15:39:26.631054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:40.888 [2024-11-20 15:39:26.631084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:40.888 [2024-11-20 15:39:26.631095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.888 [2024-11-20 15:39:26.631119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:40.888 [2024-11-20 15:39:26.631134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:40.888 [2024-11-20 15:39:26.631144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.888 [2024-11-20 15:39:26.631159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:40.888 [2024-11-20 15:39:26.631169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:40.888 [2024-11-20 15:39:26.631183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.888 
[2024-11-20 15:39:26.631193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:40.888 [2024-11-20 15:39:26.631209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:40.888 [2024-11-20 15:39:26.631219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:40.888 [2024-11-20 15:39:26.631255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.888 [2024-11-20 15:39:26.631280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:40.888 [2024-11-20 15:39:26.631299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.888 [2024-11-20 15:39:26.631323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:40.888 [2024-11-20 15:39:26.631333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.888 [2024-11-20 15:39:26.631357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:40.888 [2024-11-20 15:39:26.631371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.888 [2024-11-20 15:39:26.631395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:40.888 [2024-11-20 15:39:26.631405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.888 [2024-11-20 15:39:26.631431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:40.888 [2024-11-20 15:39:26.631445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:40.888 [2024-11-20 15:39:26.631455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.888 [2024-11-20 15:39:26.631469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:40.888 [2024-11-20 15:39:26.631478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:40.888 [2024-11-20 15:39:26.631496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:40.888 [2024-11-20 15:39:26.631520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:40.888 [2024-11-20 15:39:26.631530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631544] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:40.888 [2024-11-20 15:39:26.631560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:40.888 [2024-11-20 15:39:26.631587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.888 [2024-11-20 15:39:26.631598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.888 [2024-11-20 15:39:26.631613] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:27:40.888 [2024-11-20 15:39:26.631623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:40.888 [2024-11-20 15:39:26.631637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:40.888 [2024-11-20 15:39:26.631647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:40.888 [2024-11-20 15:39:26.631661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:40.888 [2024-11-20 15:39:26.631671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:40.888 [2024-11-20 15:39:26.631686] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:40.888 [2024-11-20 15:39:26.631699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.888 [2024-11-20 15:39:26.631720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:40.888 [2024-11-20 15:39:26.631731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:40.888 [2024-11-20 15:39:26.631748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:40.888 [2024-11-20 15:39:26.631759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:40.888 [2024-11-20 15:39:26.631774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:40.888 [2024-11-20 15:39:26.631785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:40.888 [2024-11-20 15:39:26.631800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:40.888 [2024-11-20 15:39:26.631811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:40.888 [2024-11-20 15:39:26.631827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:40.888 [2024-11-20 15:39:26.631838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:40.888 [2024-11-20 15:39:26.631853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:40.888 [2024-11-20 15:39:26.631863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:40.888 [2024-11-20 15:39:26.631878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:40.888 [2024-11-20 15:39:26.631889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:40.888 [2024-11-20 15:39:26.631904] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:40.888 [2024-11-20 
15:39:26.631916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.888 [2024-11-20 15:39:26.631936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:40.888 [2024-11-20 15:39:26.631947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:40.888 [2024-11-20 15:39:26.631962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:40.888 [2024-11-20 15:39:26.631973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:40.888 [2024-11-20 15:39:26.631989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.888 [2024-11-20 15:39:26.632003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:40.888 [2024-11-20 15:39:26.632018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:27:40.888 [2024-11-20 15:39:26.632028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.888 [2024-11-20 15:39:26.674255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.888 [2024-11-20 15:39:26.674297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:40.888 [2024-11-20 15:39:26.674317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.153 ms 00:27:40.888 [2024-11-20 15:39:26.674333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.888 [2024-11-20 15:39:26.674482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.888 [2024-11-20 15:39:26.674495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:40.888 [2024-11-20 15:39:26.674511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:40.888 [2024-11-20 15:39:26.674522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.888 [2024-11-20 15:39:26.724470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.889 [2024-11-20 15:39:26.724519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:40.889 [2024-11-20 15:39:26.724555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.914 ms 00:27:40.889 [2024-11-20 15:39:26.724567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.889 [2024-11-20 15:39:26.724706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.889 [2024-11-20 15:39:26.724720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:40.889 [2024-11-20 15:39:26.724737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:40.889 [2024-11-20 15:39:26.724747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.889 [2024-11-20 15:39:26.725188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.889 [2024-11-20 15:39:26.725208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:40.889 [2024-11-20 15:39:26.725231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:27:40.889 [2024-11-20 15:39:26.725242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:40.889 [2024-11-20 15:39:26.725368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.889 [2024-11-20 15:39:26.725387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:40.889 [2024-11-20 15:39:26.725403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:27:40.889 [2024-11-20 15:39:26.725413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.889 [2024-11-20 15:39:26.747480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.889 [2024-11-20 15:39:26.747522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:40.889 [2024-11-20 15:39:26.747542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.035 ms 00:27:40.889 [2024-11-20 15:39:26.747553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.889 [2024-11-20 15:39:26.779538] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:27:40.889 [2024-11-20 15:39:26.779592] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:40.889 [2024-11-20 15:39:26.779611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.889 [2024-11-20 15:39:26.779623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:40.889 [2024-11-20 15:39:26.779642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.907 ms 00:27:40.889 [2024-11-20 15:39:26.779652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.889 [2024-11-20 15:39:26.809896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.889 [2024-11-20 15:39:26.809938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:40.889 [2024-11-20 15:39:26.809974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.142 ms 00:27:40.889 [2024-11-20 15:39:26.809986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.889 [2024-11-20 15:39:26.828914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.889 [2024-11-20 15:39:26.828953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:40.889 [2024-11-20 15:39:26.828977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.833 ms 00:27:40.889 [2024-11-20 15:39:26.828988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.147 [2024-11-20 15:39:26.847510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.147 [2024-11-20 15:39:26.847546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:41.147 [2024-11-20 15:39:26.847565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.421 ms 00:27:41.147 [2024-11-20 15:39:26.847605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.147 [2024-11-20 15:39:26.848481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.147 [2024-11-20 15:39:26.848515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:41.147 [2024-11-20 15:39:26.848533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:27:41.147 [2024-11-20 15:39:26.848544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.147 [2024-11-20 
15:39:26.937034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.147 [2024-11-20 15:39:26.937297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:41.147 [2024-11-20 15:39:26.937334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.435 ms 00:27:41.147 [2024-11-20 15:39:26.937346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.147 [2024-11-20 15:39:26.948869] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:41.147 [2024-11-20 15:39:26.965369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.147 [2024-11-20 15:39:26.965427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:41.147 [2024-11-20 15:39:26.965451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.885 ms 00:27:41.147 [2024-11-20 15:39:26.965466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.147 [2024-11-20 15:39:26.965592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.147 [2024-11-20 15:39:26.965613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:41.147 [2024-11-20 15:39:26.965626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:41.147 [2024-11-20 15:39:26.965641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.147 [2024-11-20 15:39:26.965696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.147 [2024-11-20 15:39:26.965713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:41.148 [2024-11-20 15:39:26.965724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:41.148 [2024-11-20 15:39:26.965747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.148 [2024-11-20 15:39:26.965774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.148 [2024-11-20 15:39:26.965790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:41.148 [2024-11-20 15:39:26.965800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:41.148 [2024-11-20 15:39:26.965815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.148 [2024-11-20 15:39:26.965858] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:41.148 [2024-11-20 15:39:26.965881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.148 [2024-11-20 15:39:26.965891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:41.148 [2024-11-20 15:39:26.965912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:41.148 [2024-11-20 15:39:26.965922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.148 [2024-11-20 15:39:27.003112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.148 [2024-11-20 15:39:27.003269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:41.148 [2024-11-20 15:39:27.003302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.148 ms 00:27:41.148 [2024-11-20 15:39:27.003313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.148 [2024-11-20 15:39:27.003497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.148 [2024-11-20 15:39:27.003512] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:41.148 [2024-11-20 15:39:27.003529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:41.148 [2024-11-20 15:39:27.003545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.148 [2024-11-20 15:39:27.004595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:41.148 [2024-11-20 15:39:27.009018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 413.272 ms, result 0 00:27:41.148 [2024-11-20 15:39:27.010201] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:41.148 Some configs were skipped because the RPC state that can call them passed over. 00:27:41.148 15:39:27 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:27:41.406 [2024-11-20 15:39:27.302988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.406 [2024-11-20 15:39:27.303277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:41.406 [2024-11-20 15:39:27.303448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:27:41.406 [2024-11-20 15:39:27.303510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.406 [2024-11-20 15:39:27.303714] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.335 ms, result 0 00:27:41.406 true 00:27:41.406 15:39:27 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:27:41.665 [2024-11-20 15:39:27.566965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.665 [2024-11-20 15:39:27.567018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:41.665 [2024-11-20 15:39:27.567041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:27:41.665 [2024-11-20 15:39:27.567053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.665 [2024-11-20 15:39:27.567105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.402 ms, result 0 00:27:41.665 true 00:27:41.665 15:39:27 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78699 00:27:41.665 15:39:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78699 ']' 00:27:41.665 15:39:27 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78699 00:27:41.665 15:39:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:41.665 15:39:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.665 15:39:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78699 00:27:41.665 killing process with pid 78699 00:27:41.665 15:39:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:41.665 15:39:27 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:41.665 15:39:27 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78699' 00:27:41.665 15:39:27 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78699 00:27:41.665 15:39:27 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78699 00:27:43.042 [2024-11-20 15:39:28.757101] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.042 [2024-11-20 15:39:28.757160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:43.042 [2024-11-20 15:39:28.757176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:43.042 [2024-11-20 15:39:28.757190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.042 [2024-11-20 15:39:28.757215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:43.042 [2024-11-20 15:39:28.761526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.042 [2024-11-20 15:39:28.761558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:43.042 [2024-11-20 15:39:28.761584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.291 ms 00:27:43.042 [2024-11-20 15:39:28.761595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.042 [2024-11-20 15:39:28.761845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.042 [2024-11-20 15:39:28.761858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:43.042 [2024-11-20 15:39:28.761871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:27:43.042 [2024-11-20 15:39:28.761881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.042 [2024-11-20 15:39:28.767088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.042 [2024-11-20 15:39:28.767127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:43.042 [2024-11-20 15:39:28.767144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:27:43.042 [2024-11-20 15:39:28.767155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.042 [2024-11-20 15:39:28.772997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.042 [2024-11-20 15:39:28.773032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:43.042 [2024-11-20 15:39:28.773047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.800 ms 00:27:43.042 [2024-11-20 15:39:28.773058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.042 [2024-11-20 15:39:28.788497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.042 [2024-11-20 15:39:28.788532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:43.042 [2024-11-20 15:39:28.788552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.363 ms 00:27:43.042 [2024-11-20 15:39:28.788587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.042 [2024-11-20 15:39:28.798977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.042 [2024-11-20 15:39:28.799162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:43.042 [2024-11-20 15:39:28.799204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.332 ms 00:27:43.042 [2024-11-20 15:39:28.799215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.042 [2024-11-20 15:39:28.799354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.042 [2024-11-20 15:39:28.799368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:43.042 [2024-11-20 15:39:28.799382] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:43.042 [2024-11-20 15:39:28.799393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.042 [2024-11-20 15:39:28.815284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.042 [2024-11-20 15:39:28.815334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:43.042 [2024-11-20 15:39:28.815353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.866 ms 00:27:43.042 [2024-11-20 15:39:28.815363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.042 [2024-11-20 15:39:28.830774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.042 [2024-11-20 15:39:28.830808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:43.042 [2024-11-20 15:39:28.830833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.347 ms 00:27:43.042 [2024-11-20 15:39:28.830843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.042 [2024-11-20 15:39:28.845794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.042 [2024-11-20 15:39:28.845936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:43.042 [2024-11-20 15:39:28.845969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.891 ms 00:27:43.043 [2024-11-20 15:39:28.845979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.043 [2024-11-20 15:39:28.860832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.043 [2024-11-20 15:39:28.860976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:43.043 [2024-11-20 15:39:28.861006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.740 ms 00:27:43.043 [2024-11-20 15:39:28.861017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.043 [2024-11-20 15:39:28.861075] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:43.043 [2024-11-20 15:39:28.861093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 
15:39:28.861229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:27:43.043 [2024-11-20 15:39:28.861540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:43.043 [2024-11-20 15:39:28.861893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free
00:27:43.043 [2024-11-20 15:39:28.861904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:27:43.043-00:27:43.044 [2024-11-20 15:39:28.861920 - 15:39:28.862427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62 ... Band 99: 0 / 261120 wr_cnt: 0 state: free (identical for every band)
00:27:43.044 [2024-11-20 15:39:28.862445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:27:43.044 [2024-11-20 15:39:28.862464] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:43.044 [2024-11-20 15:39:28.862489] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3adbce50-96c5-4eda-b128-33d3af6d2f46
00:27:43.044 [2024-11-20 15:39:28.862513] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:27:43.044 [2024-11-20 15:39:28.862535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:27:43.044 [2024-11-20 15:39:28.862545] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:27:43.044 [2024-11-20 15:39:28.862578] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:27:43.044 [2024-11-20 15:39:28.862589] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:43.044 [2024-11-20 15:39:28.862604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:43.044 [2024-11-20 15:39:28.862614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:43.044 [2024-11-20 15:39:28.862628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:43.044 [2024-11-20 15:39:28.862638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:43.044 [2024-11-20 15:39:28.862652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
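A note on the statistics block above: WAF (write amplification factor) is total media writes over user writes, and at this point the trim test has issued no user I/O, so the 960 blocks written so far are pure FTL metadata and the quotient degenerates to infinity. A minimal sketch of that computation in Python (the helper name and the division-by-zero convention are mine; the authoritative formula is in ftl_debug.c):

    # Recompute the WAF printed by ftl_dev_dump_stats above.
    def waf(total_writes: int, user_writes: int) -> float:
        """Write amplification factor; float('inf') when there are no user writes."""
        return total_writes / user_writes if user_writes else float("inf")

    print(waf(960, 0))  # inf -> matches "WAF: inf" in the dump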
00:27:43.044 [2024-11-20 15:39:28.862663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:43.044 [2024-11-20 15:39:28.862679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.581 ms 00:27:43.044 [2024-11-20 15:39:28.862689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.044 [2024-11-20 15:39:28.884092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.044 [2024-11-20 15:39:28.884254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:43.044 [2024-11-20 15:39:28.884290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.365 ms 00:27:43.044 [2024-11-20 15:39:28.884301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.044 [2024-11-20 15:39:28.884929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.044 [2024-11-20 15:39:28.884946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:43.044 [2024-11-20 15:39:28.884963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:27:43.044 [2024-11-20 15:39:28.884978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.044 [2024-11-20 15:39:28.956846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.044 [2024-11-20 15:39:28.956911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:43.044 [2024-11-20 15:39:28.956929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.044 [2024-11-20 15:39:28.956940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.044 [2024-11-20 15:39:28.957090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.044 [2024-11-20 15:39:28.957104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:43.044 [2024-11-20 15:39:28.957118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.044 [2024-11-20 15:39:28.957132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.044 [2024-11-20 15:39:28.957195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.044 [2024-11-20 15:39:28.957209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:43.044 [2024-11-20 15:39:28.957225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.044 [2024-11-20 15:39:28.957235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.044 [2024-11-20 15:39:28.957256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.044 [2024-11-20 15:39:28.957268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:43.044 [2024-11-20 15:39:28.957281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.044 [2024-11-20 15:39:28.957291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.302 [2024-11-20 15:39:29.087276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.302 [2024-11-20 15:39:29.087465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:43.302 [2024-11-20 15:39:29.087499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.302 [2024-11-20 15:39:29.087511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.302 [2024-11-20 
15:39:29.192858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.302 [2024-11-20 15:39:29.192922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:43.302 [2024-11-20 15:39:29.192960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.302 [2024-11-20 15:39:29.192987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.302 [2024-11-20 15:39:29.193112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.302 [2024-11-20 15:39:29.193124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:43.302 [2024-11-20 15:39:29.193145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.302 [2024-11-20 15:39:29.193155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.302 [2024-11-20 15:39:29.193189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.302 [2024-11-20 15:39:29.193200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:43.302 [2024-11-20 15:39:29.193215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.302 [2024-11-20 15:39:29.193225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.302 [2024-11-20 15:39:29.193350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.302 [2024-11-20 15:39:29.193364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:43.302 [2024-11-20 15:39:29.193379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.302 [2024-11-20 15:39:29.193389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.302 [2024-11-20 15:39:29.193434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.302 [2024-11-20 15:39:29.193446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:43.302 [2024-11-20 15:39:29.193461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.302 [2024-11-20 15:39:29.193471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.302 [2024-11-20 15:39:29.193519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.302 [2024-11-20 15:39:29.193531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:43.302 [2024-11-20 15:39:29.193546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.302 [2024-11-20 15:39:29.193556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.302 [2024-11-20 15:39:29.193643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.302 [2024-11-20 15:39:29.193657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:43.302 [2024-11-20 15:39:29.193669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.302 [2024-11-20 15:39:29.193699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.302 [2024-11-20 15:39:29.193841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 436.711 ms, result 0 00:27:44.679 15:39:30 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:27:44.679 15:39:30 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:44.679 [2024-11-20 15:39:30.377221] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:27:44.679 [2024-11-20 15:39:30.377402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78777 ] 00:27:44.679 [2024-11-20 15:39:30.570539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.937 [2024-11-20 15:39:30.685717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.196 [2024-11-20 15:39:31.060972] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:45.196 [2024-11-20 15:39:31.061043] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:45.471 [2024-11-20 15:39:31.223249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.471 [2024-11-20 15:39:31.223436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:45.471 [2024-11-20 15:39:31.223462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:45.471 [2024-11-20 15:39:31.223474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.471 [2024-11-20 15:39:31.226649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.471 [2024-11-20 15:39:31.226685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:45.471 [2024-11-20 15:39:31.226698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.145 ms 00:27:45.471 [2024-11-20 15:39:31.226709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.471 [2024-11-20 15:39:31.226808] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:45.471 [2024-11-20 15:39:31.227798] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:45.471 [2024-11-20 15:39:31.227832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.471 [2024-11-20 15:39:31.227843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:45.471 [2024-11-20 15:39:31.227855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:27:45.471 [2024-11-20 15:39:31.227864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.471 [2024-11-20 15:39:31.229452] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:45.471 [2024-11-20 15:39:31.249037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.471 [2024-11-20 15:39:31.249081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:45.471 [2024-11-20 15:39:31.249095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.585 ms 00:27:45.471 [2024-11-20 15:39:31.249106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.471 [2024-11-20 15:39:31.249209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.471 [2024-11-20 15:39:31.249223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:45.471 [2024-11-20 15:39:31.249235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:27:45.471 [2024-11-20 15:39:31.249245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.471 [2024-11-20 15:39:31.255972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.471 [2024-11-20 15:39:31.256145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:45.471 [2024-11-20 15:39:31.256167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.685 ms 00:27:45.471 [2024-11-20 15:39:31.256178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.471 [2024-11-20 15:39:31.256297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.471 [2024-11-20 15:39:31.256311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:45.471 [2024-11-20 15:39:31.256323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:45.471 [2024-11-20 15:39:31.256333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.471 [2024-11-20 15:39:31.256363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.471 [2024-11-20 15:39:31.256377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:45.471 [2024-11-20 15:39:31.256388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:45.471 [2024-11-20 15:39:31.256398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.471 [2024-11-20 15:39:31.256422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:45.471 [2024-11-20 15:39:31.261453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.471 [2024-11-20 15:39:31.261486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:45.471 [2024-11-20 15:39:31.261499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.037 ms 00:27:45.471 [2024-11-20 15:39:31.261508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.471 [2024-11-20 15:39:31.261588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.471 [2024-11-20 15:39:31.261619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:45.471 [2024-11-20 15:39:31.261630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:45.471 [2024-11-20 15:39:31.261640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.471 [2024-11-20 15:39:31.261664] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:45.471 [2024-11-20 15:39:31.261690] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:45.471 [2024-11-20 15:39:31.261726] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:45.471 [2024-11-20 15:39:31.261743] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:45.471 [2024-11-20 15:39:31.261835] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:45.471 [2024-11-20 15:39:31.261848] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:45.471 [2024-11-20 15:39:31.261862] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:45.472 [2024-11-20 15:39:31.261875] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:45.472 [2024-11-20 15:39:31.261891] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:45.472 [2024-11-20 15:39:31.261902] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:45.472 [2024-11-20 15:39:31.261911] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:45.472 [2024-11-20 15:39:31.261922] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:45.472 [2024-11-20 15:39:31.261932] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:45.472 [2024-11-20 15:39:31.261942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.472 [2024-11-20 15:39:31.261952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:45.472 [2024-11-20 15:39:31.261962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:27:45.472 [2024-11-20 15:39:31.261972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.472 [2024-11-20 15:39:31.262049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.472 [2024-11-20 15:39:31.262064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:45.472 [2024-11-20 15:39:31.262074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:45.472 [2024-11-20 15:39:31.262084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.472 [2024-11-20 15:39:31.262176] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:45.472 [2024-11-20 15:39:31.262189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:45.472 [2024-11-20 15:39:31.262200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.472 [2024-11-20 15:39:31.262211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:45.472 [2024-11-20 15:39:31.262231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:45.472 [2024-11-20 15:39:31.262250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:45.472 [2024-11-20 15:39:31.262260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.472 [2024-11-20 15:39:31.262279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:45.472 [2024-11-20 15:39:31.262288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:45.472 [2024-11-20 15:39:31.262297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.472 [2024-11-20 15:39:31.262318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:45.472 [2024-11-20 15:39:31.262328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:45.472 [2024-11-20 15:39:31.262338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262347] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:45.472 [2024-11-20 15:39:31.262356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:45.472 [2024-11-20 15:39:31.262366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:45.472 [2024-11-20 15:39:31.262384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.472 [2024-11-20 15:39:31.262403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:45.472 [2024-11-20 15:39:31.262413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.472 [2024-11-20 15:39:31.262431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:45.472 [2024-11-20 15:39:31.262441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.472 [2024-11-20 15:39:31.262459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:45.472 [2024-11-20 15:39:31.262468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.472 [2024-11-20 15:39:31.262486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:45.472 [2024-11-20 15:39:31.262496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.472 [2024-11-20 15:39:31.262514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:45.472 [2024-11-20 15:39:31.262524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:45.472 [2024-11-20 15:39:31.262533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.472 [2024-11-20 15:39:31.262542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:45.472 [2024-11-20 15:39:31.262560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:45.472 [2024-11-20 15:39:31.262581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:45.472 [2024-11-20 15:39:31.262601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:45.472 [2024-11-20 15:39:31.262612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262621] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:45.472 [2024-11-20 15:39:31.262631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:45.472 [2024-11-20 15:39:31.262641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.472 [2024-11-20 15:39:31.262654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.472 [2024-11-20 15:39:31.262665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:45.472 
[2024-11-20 15:39:31.262674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:45.472 [2024-11-20 15:39:31.262684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:45.472 [2024-11-20 15:39:31.262693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:45.472 [2024-11-20 15:39:31.262703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:45.472 [2024-11-20 15:39:31.262712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:45.472 [2024-11-20 15:39:31.262723] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:45.472 [2024-11-20 15:39:31.262735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.472 [2024-11-20 15:39:31.262747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:45.472 [2024-11-20 15:39:31.262757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:45.472 [2024-11-20 15:39:31.262767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:45.472 [2024-11-20 15:39:31.262777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:45.472 [2024-11-20 15:39:31.262787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:45.472 [2024-11-20 15:39:31.262798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:45.472 [2024-11-20 15:39:31.262808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:45.472 [2024-11-20 15:39:31.262818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:45.472 [2024-11-20 15:39:31.262828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:45.472 [2024-11-20 15:39:31.262838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:45.472 [2024-11-20 15:39:31.262848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:45.473 [2024-11-20 15:39:31.262858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:45.473 [2024-11-20 15:39:31.262868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:45.473 [2024-11-20 15:39:31.262878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:45.473 [2024-11-20 15:39:31.262888] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:45.473 [2024-11-20 15:39:31.262899] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:45.473 [2024-11-20 15:39:31.262910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:27:45.473 [2024-11-20 15:39:31.262920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:27:45.473 [2024-11-20 15:39:31.262931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:27:45.473 [2024-11-20 15:39:31.262941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:27:45.473 [2024-11-20 15:39:31.262952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.473 [2024-11-20 15:39:31.262962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:27:45.473 [2024-11-20 15:39:31.262977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms
00:27:45.473 [2024-11-20 15:39:31.262987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.473 [2024-11-20 15:39:31.304300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.473 [2024-11-20 15:39:31.304345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:45.473 [2024-11-20 15:39:31.304361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.255 ms
00:27:45.473 [2024-11-20 15:39:31.304372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.473 [2024-11-20 15:39:31.304520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.473 [2024-11-20 15:39:31.304539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:27:45.473 [2024-11-20 15:39:31.304551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:27:45.473 [2024-11-20 15:39:31.304560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.473 [2024-11-20 15:39:31.385848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.473 [2024-11-20 15:39:31.385911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:27:45.473 [2024-11-20 15:39:31.385935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.243 ms
00:27:45.473 [2024-11-20 15:39:31.385957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.473 [2024-11-20 15:39:31.386131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.473 [2024-11-20 15:39:31.386152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:27:45.473 [2024-11-20 15:39:31.386170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:27:45.473 [2024-11-20 15:39:31.386186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.473 [2024-11-20 15:39:31.386751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.473 [2024-11-20 15:39:31.386774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:27:45.473 [2024-11-20 15:39:31.386792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms
00:27:45.473 [2024-11-20 15:39:31.386817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
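The layout numbers in this startup dump are internally consistent once you assume the FTL's 4 KiB block: the type:0x2 entry in the nvc superblock dump above (blk_offs:0x20 blk_sz:0x5a00) is the l2p region, and 0x5a00 blocks of 4 KiB is exactly the "90.00 MiB" that ftl_layout_dump printed for it, which in turn is 23592960 L2P entries at the 4-byte address size. A quick cross-check in Python (the block size is inferred, not stated in the log):

    # Cross-check the superblock layout against the MiB figures above,
    # assuming a 4 KiB FTL block (0x20 blocks <-> the "0.12 MiB" entries).
    FTL_BLOCK = 4096  # bytes, inferred

    def blocks_to_mib(blk_sz_hex: str) -> float:
        return int(blk_sz_hex, 16) * FTL_BLOCK / (1 << 20)

    print(blocks_to_mib("0x5a00"))   # 90.0  -> "Region l2p ... blocks: 90.00 MiB"
    print(blocks_to_mib("0x20"))     # 0.125 -> printed as "0.12 MiB"
    print(23592960 * 4 / (1 << 20))  # 90.0  -> L2P entries x 4-byte addresses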
00:27:45.473 [2024-11-20 15:39:31.387014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.473 [2024-11-20 15:39:31.387037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:45.473 [2024-11-20 15:39:31.387053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms
00:27:45.473 [2024-11-20 15:39:31.387069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.416001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.416058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:45.781 [2024-11-20 15:39:31.416080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.895 ms
00:27:45.781 [2024-11-20 15:39:31.416097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.446804] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:27:45.781 [2024-11-20 15:39:31.446863] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:27:45.781 [2024-11-20 15:39:31.446887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.446905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:27:45.781 [2024-11-20 15:39:31.446923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.606 ms
00:27:45.781 [2024-11-20 15:39:31.446939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.495493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.495586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:27:45.781 [2024-11-20 15:39:31.495609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.417 ms
00:27:45.781 [2024-11-20 15:39:31.495626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.520950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.521002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:27:45.781 [2024-11-20 15:39:31.521016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.177 ms
00:27:45.781 [2024-11-20 15:39:31.521043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.539424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.539465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:27:45.781 [2024-11-20 15:39:31.539478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.297 ms
00:27:45.781 [2024-11-20 15:39:31.539488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.540238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.540281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:27:45.781 [2024-11-20 15:39:31.540294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms
00:27:45.781 [2024-11-20 15:39:31.540305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
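The "Initialize P2L checkpointing" step just above ties back to the layout dump earlier: regions p2l0 through p2l3 are 8.00 MiB each, and the startup banner said "P2L checkpoint pages: 2048". With the 4 KiB FTL block assumed before, one checkpoint region is exactly those 2048 pages:

    # P2L checkpoint region size, assuming a 4 KiB FTL block/page.
    pages, page_size = 2048, 4096
    print(pages * page_size / (1 << 20))  # 8.0 MiB per p2l region (x4 regions)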
00:27:45.781 [2024-11-20 15:39:31.629591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.629654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:27:45.781 [2024-11-20 15:39:31.629671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.254 ms
00:27:45.781 [2024-11-20 15:39:31.629681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.640903] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:27:45.781 [2024-11-20 15:39:31.657664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.657729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:27:45.781 [2024-11-20 15:39:31.657747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.833 ms
00:27:45.781 [2024-11-20 15:39:31.657765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.657886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.657900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:27:45.781 [2024-11-20 15:39:31.657913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:27:45.781 [2024-11-20 15:39:31.657923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.657979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.657991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:27:45.781 [2024-11-20 15:39:31.658002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:27:45.781 [2024-11-20 15:39:31.658012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.658052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.658066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:27:45.781 [2024-11-20 15:39:31.658077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:27:45.781 [2024-11-20 15:39:31.658087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.658126] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:27:45.781 [2024-11-20 15:39:31.658138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.658148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:27:45.781 [2024-11-20 15:39:31.658159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:27:45.781 [2024-11-20 15:39:31.658169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.781 [2024-11-20 15:39:31.695611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.781 [2024-11-20 15:39:31.695655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:27:45.781 [2024-11-20 15:39:31.695669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.418 ms
00:27:45.781 [2024-11-20 15:39:31.695680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
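Each management step in this log is a quadruple of trace_step records (Action/Rollback, name, duration, status). Folding them back into (name, duration) pairs makes it easy to see where startup time goes; summed, the steps above account for nearly all of the "FTL startup ... duration = 473.217 ms" total reported just below, the remainder being time spent between traced steps. A sketch, assuming one record per line (the file name is hypothetical):

    import re

    # Pair each "name: ..." record with the "duration: ... ms" record that follows it.
    NAME = re.compile(r"name: (.*)$")
    DUR = re.compile(r"duration: ([0-9.]+) ms$")

    def step_durations(lines):
        steps, pending = [], None
        for line in lines:
            if m := NAME.search(line):
                pending = m.group(1)
            elif (m := DUR.search(line)) and pending is not None:
                steps.append((pending, float(m.group(1))))
                pending = None
        return steps

    # usage: sum(ms for _, ms in step_durations(open("ftl_startup.log")))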
00:27:45.782 [2024-11-20 15:39:31.695802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.782 [2024-11-20 15:39:31.695817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:45.782 [2024-11-20 15:39:31.695829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:27:45.782 [2024-11-20 15:39:31.695839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.782 [2024-11-20 15:39:31.696824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:45.782 [2024-11-20 15:39:31.701039] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 473.217 ms, result 0
00:27:45.782 [2024-11-20 15:39:31.701957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:45.782 [2024-11-20 15:39:31.720786] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:47.158  [2024-11-20T15:39:34.053Z] Copying: 33/256 [MB] (33 MBps)
[2024-11-20T15:39:34.988Z] Copying: 63/256 [MB] (30 MBps)
[2024-11-20T15:39:35.925Z] Copying: 93/256 [MB] (30 MBps)
[2024-11-20T15:39:36.863Z] Copying: 122/256 [MB] (29 MBps)
[2024-11-20T15:39:37.799Z] Copying: 152/256 [MB] (29 MBps)
[2024-11-20T15:39:38.733Z] Copying: 181/256 [MB] (29 MBps)
[2024-11-20T15:39:40.109Z] Copying: 211/256 [MB] (29 MBps)
[2024-11-20T15:39:40.367Z] Copying: 241/256 [MB] (29 MBps)
[2024-11-20T15:39:40.367Z] Copying: 256/256 [MB] (average 30 MBps)
[2024-11-20 15:39:40.232623] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:54.409 [2024-11-20 15:39:40.247114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.409 [2024-11-20 15:39:40.247156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:54.409 [2024-11-20 15:39:40.247172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:27:54.409 [2024-11-20 15:39:40.247193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.409 [2024-11-20 15:39:40.247215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:27:54.410 [2024-11-20 15:39:40.251306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.410 [2024-11-20 15:39:40.251483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:27:54.410 [2024-11-20 15:39:40.251505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.075 ms
00:27:54.410 [2024-11-20 15:39:40.251515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.410 [2024-11-20 15:39:40.251767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.410 [2024-11-20 15:39:40.251782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:27:54.410 [2024-11-20 15:39:40.251793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms
00:27:54.410 [2024-11-20 15:39:40.251803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.410 [2024-11-20 15:39:40.254715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.410 [2024-11-20 15:39:40.254756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:27:54.410 [2024-11-20 15:39:40.254767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.896 ms
00:27:54.410 [2024-11-20 15:39:40.254778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.410 [2024-11-20 15:39:40.260373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.410 [2024-11-20 15:39:40.260398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:27:54.410 [2024-11-20 15:39:40.260409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.576 ms
00:27:54.410 [2024-11-20 15:39:40.260419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.410 [2024-11-20 15:39:40.295286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.410 [2024-11-20 15:39:40.295320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:27:54.410 [2024-11-20 15:39:40.295332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.806 ms
00:27:54.410 [2024-11-20 15:39:40.295341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.410 [2024-11-20 15:39:40.316017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.410 [2024-11-20 15:39:40.316061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:27:54.410 [2024-11-20 15:39:40.316096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.620 ms
00:27:54.410 [2024-11-20 15:39:40.316106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.410 [2024-11-20 15:39:40.316234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.410 [2024-11-20 15:39:40.316247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:27:54.410 [2024-11-20 15:39:40.316258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:27:54.410 [2024-11-20 15:39:40.316267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.410 [2024-11-20 15:39:40.353606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.410 [2024-11-20 15:39:40.353639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:27:54.410 [2024-11-20 15:39:40.353653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.306 ms
00:27:54.410 [2024-11-20 15:39:40.353662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.670 [2024-11-20 15:39:40.389790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.670 [2024-11-20 15:39:40.389823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:27:54.670 [2024-11-20 15:39:40.389835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.068 ms
00:27:54.670 [2024-11-20 15:39:40.389844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.670 [2024-11-20 15:39:40.424581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.670 [2024-11-20 15:39:40.424614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:27:54.670 [2024-11-20 15:39:40.424626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.683 ms
00:27:54.670 [2024-11-20 15:39:40.424634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.670 [2024-11-20 15:39:40.459375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.670 [2024-11-20 15:39:40.459408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:54.670 [2024-11-20 15:39:40.459420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.658 ms
00:27:54.670 [2024-11-20 15:39:40.459430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
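The copy that just finished is easy to sanity-check: the spdk_dd invocation earlier asked for --count=65536 blocks, and at the FTL bdev's 4 KiB block size (assumed, consistent with the layout arithmetic above) that is exactly the 256 MiB the progress lines report; at the reported 30 MBps average it should take roughly 8.5 s, which matches the gap between startup finishing (15:39:31.7) and the copy completing (15:39:40.4).

    # spdk_dd copy size and duration, assuming a 4 KiB block.
    count, block = 65536, 4096
    size_mib = count * block / (1 << 20)
    print(size_mib)       # 256.0 -> "Copying: 256/256 [MB]"
    print(size_mib / 30)  # ~8.5 s at the reported average of 30 MBps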
00:27:54.670 [2024-11-20 15:39:40.459485] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:54.670 [2024-11-20 15:39:40.459502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:27:54.670-00:27:54.672 [2024-11-20 15:39:40.459514 - 15:39:40.460576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 ... Band 99: 0 / 261120 wr_cnt: 0 state: free (identical for every band)
00:27:54.672 [2024-11-20 15:39:40.460587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:27:54.672 [2024-11-20 15:39:40.460605] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:54.672 [2024-11-20 15:39:40.460615] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3adbce50-96c5-4eda-b128-33d3af6d2f46
00:27:54.672 [2024-11-20 15:39:40.460626] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:27:54.672 [2024-11-20 15:39:40.460636] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:27:54.672 [2024-11-20 15:39:40.460645] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:27:54.672 [2024-11-20 15:39:40.460655] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:27:54.672 [2024-11-20 15:39:40.460665] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:54.672 [2024-11-20 15:39:40.460675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:54.672 [2024-11-20 15:39:40.460686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:54.672 [2024-11-20 15:39:40.460694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:54.672 [2024-11-20 15:39:40.460703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:54.672 [2024-11-20 15:39:40.460713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.672 [2024-11-20 15:39:40.460730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:54.672 [2024-11-20 15:39:40.460741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms
00:27:54.672 [2024-11-20 15:39:40.460751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.672 [2024-11-20 15:39:40.480773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.672 [2024-11-20 15:39:40.480803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:54.672 [2024-11-20 15:39:40.480815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.001 ms
00:27:54.672 [2024-11-20 15:39:40.480825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.672 [2024-11-20 15:39:40.481462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:54.672 [2024-11-20 15:39:40.481480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:54.672 [2024-11-20 15:39:40.481491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms
00:27:54.672 [2024-11-20 15:39:40.481501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.672 [2024-11-20 15:39:40.534839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:54.672 [2024-11-20 15:39:40.534870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:54.672 [2024-11-20 15:39:40.534899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:54.672 [2024-11-20 15:39:40.534909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:54.672 [2024-11-20 15:39:40.534985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:54.672 [2024-11-20 15:39:40.534996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:54.672
[2024-11-20 15:39:40.535007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.672 [2024-11-20 15:39:40.535016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.672 [2024-11-20 15:39:40.535076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.672 [2024-11-20 15:39:40.535090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:54.672 [2024-11-20 15:39:40.535100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.672 [2024-11-20 15:39:40.535110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.672 [2024-11-20 15:39:40.535128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.672 [2024-11-20 15:39:40.535146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:54.672 [2024-11-20 15:39:40.535156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.672 [2024-11-20 15:39:40.535166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.931 [2024-11-20 15:39:40.655237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.931 [2024-11-20 15:39:40.655290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:54.931 [2024-11-20 15:39:40.655321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.931 [2024-11-20 15:39:40.655331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.931 [2024-11-20 15:39:40.752462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.931 [2024-11-20 15:39:40.752513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:54.931 [2024-11-20 15:39:40.752528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.931 [2024-11-20 15:39:40.752538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.931 [2024-11-20 15:39:40.752647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.931 [2024-11-20 15:39:40.752660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:54.931 [2024-11-20 15:39:40.752671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.931 [2024-11-20 15:39:40.752681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.932 [2024-11-20 15:39:40.752708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.932 [2024-11-20 15:39:40.752719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:54.932 [2024-11-20 15:39:40.752735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.932 [2024-11-20 15:39:40.752745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.932 [2024-11-20 15:39:40.752850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.932 [2024-11-20 15:39:40.752863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:54.932 [2024-11-20 15:39:40.752874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.932 [2024-11-20 15:39:40.752884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.932 [2024-11-20 15:39:40.752919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.932 [2024-11-20 15:39:40.752932] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:54.932 [2024-11-20 15:39:40.752942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.932 [2024-11-20 15:39:40.752956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.932 [2024-11-20 15:39:40.752994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.932 [2024-11-20 15:39:40.753005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:54.932 [2024-11-20 15:39:40.753015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.932 [2024-11-20 15:39:40.753026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.932 [2024-11-20 15:39:40.753069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.932 [2024-11-20 15:39:40.753080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:54.932 [2024-11-20 15:39:40.753094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.932 [2024-11-20 15:39:40.753104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.932 [2024-11-20 15:39:40.753239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 506.116 ms, result 0 00:27:55.868 00:27:55.868 00:27:55.868 15:39:41 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:27:55.868 15:39:41 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:27:56.462 15:39:42 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:56.462 [2024-11-20 15:39:42.366226] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
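At this point trim.sh verifies the trim and then rewrites the range: cmp --bytes=4194304 asserts that the first 4 MiB of the data file compare equal to /dev/zero, md5sum fingerprints the file, and spdk_dd copies 1024 blocks of random_pattern onto the ftl0 bdev (1024 blocks x 4 KiB matches the "4096/4096 [kB]" copy progress reported further down). The "WAF: inf" in the statistics dump above is arithmetic, not an error: write amplification factor is total writes divided by user writes, and 960 internal writes against 0 user writes divides by zero, printed as inf. A minimal stand-alone sketch of the verification step (illustrative only, not part of the SPDK test suite; just the file path and byte count come from the log, the helper names are made up here):

```python
# Sketch of what the trim.sh steps above check: after the trim, the target
# range must read back as zeroes, and md5sum fingerprints the data file.
import hashlib

DATA_FILE = "/home/vagrant/spdk_repo/spdk/test/ftl/data"  # path from the log
CHECK_BYTES = 4194304  # 4 MiB, the --bytes argument given to cmp

def trimmed_range_is_zero(path: str, nbytes: int) -> bool:
    """Equivalent of `cmp --bytes=N path /dev/zero`: True when the first
    nbytes of the file are all zero."""
    with open(path, "rb") as f:
        head = f.read(nbytes)
    return len(head) == nbytes and head == bytes(nbytes)

def fingerprint(path: str) -> str:
    """Equivalent of `md5sum path`, streamed in 1 MiB chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    return md5.hexdigest()

if __name__ == "__main__":
    assert trimmed_range_is_zero(DATA_FILE, CHECK_BYTES), "trimmed range not zeroed"
    print(fingerprint(DATA_FILE))
```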
00:27:56.462 [2024-11-20 15:39:42.366585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78898 ] 00:27:56.721 [2024-11-20 15:39:42.537620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.721 [2024-11-20 15:39:42.646669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.291 [2024-11-20 15:39:42.980138] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:57.291 [2024-11-20 15:39:42.980207] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:57.291 [2024-11-20 15:39:43.141833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.291 [2024-11-20 15:39:43.141886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:57.291 [2024-11-20 15:39:43.141901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:57.291 [2024-11-20 15:39:43.141928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.145090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.291 [2024-11-20 15:39:43.145129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:57.291 [2024-11-20 15:39:43.145141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:27:57.291 [2024-11-20 15:39:43.145151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.145261] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:57.291 [2024-11-20 15:39:43.146243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:57.291 [2024-11-20 15:39:43.146278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.291 [2024-11-20 15:39:43.146290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:57.291 [2024-11-20 15:39:43.146302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:27:57.291 [2024-11-20 15:39:43.146312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.147827] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:57.291 [2024-11-20 15:39:43.166108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.291 [2024-11-20 15:39:43.166149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:57.291 [2024-11-20 15:39:43.166164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.283 ms 00:27:57.291 [2024-11-20 15:39:43.166173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.166290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.291 [2024-11-20 15:39:43.166305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:57.291 [2024-11-20 15:39:43.166316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:57.291 [2024-11-20 15:39:43.166326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.173090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:57.291 [2024-11-20 15:39:43.173121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:57.291 [2024-11-20 15:39:43.173149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.722 ms 00:27:57.291 [2024-11-20 15:39:43.173159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.173264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.291 [2024-11-20 15:39:43.173279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:57.291 [2024-11-20 15:39:43.173290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:57.291 [2024-11-20 15:39:43.173300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.173329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.291 [2024-11-20 15:39:43.173344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:57.291 [2024-11-20 15:39:43.173355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:57.291 [2024-11-20 15:39:43.173365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.173400] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:57.291 [2024-11-20 15:39:43.178077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.291 [2024-11-20 15:39:43.178111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:57.291 [2024-11-20 15:39:43.178123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.683 ms 00:27:57.291 [2024-11-20 15:39:43.178133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.178201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.291 [2024-11-20 15:39:43.178214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:57.291 [2024-11-20 15:39:43.178224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:57.291 [2024-11-20 15:39:43.178234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.178254] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:57.291 [2024-11-20 15:39:43.178279] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:57.291 [2024-11-20 15:39:43.178315] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:57.291 [2024-11-20 15:39:43.178332] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:57.291 [2024-11-20 15:39:43.178437] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:57.291 [2024-11-20 15:39:43.178450] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:57.291 [2024-11-20 15:39:43.178463] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:57.291 [2024-11-20 15:39:43.178477] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:57.291 [2024-11-20 15:39:43.178492] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:57.291 [2024-11-20 15:39:43.178504] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:57.291 [2024-11-20 15:39:43.178514] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:57.291 [2024-11-20 15:39:43.178524] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:57.291 [2024-11-20 15:39:43.178534] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:57.291 [2024-11-20 15:39:43.178545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.291 [2024-11-20 15:39:43.178565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:57.291 [2024-11-20 15:39:43.178587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:27:57.291 [2024-11-20 15:39:43.178597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.178675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.291 [2024-11-20 15:39:43.178690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:57.291 [2024-11-20 15:39:43.178701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:57.291 [2024-11-20 15:39:43.178710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.291 [2024-11-20 15:39:43.178803] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:57.291 [2024-11-20 15:39:43.178816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:57.291 [2024-11-20 15:39:43.178826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:57.291 [2024-11-20 15:39:43.178837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.291 [2024-11-20 15:39:43.178847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:57.291 [2024-11-20 15:39:43.178856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:57.291 [2024-11-20 15:39:43.178866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:57.292 [2024-11-20 15:39:43.178875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:57.292 [2024-11-20 15:39:43.178885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:57.292 [2024-11-20 15:39:43.178894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:57.292 [2024-11-20 15:39:43.178903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:57.292 [2024-11-20 15:39:43.178912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:57.292 [2024-11-20 15:39:43.178922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:57.292 [2024-11-20 15:39:43.178944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:57.292 [2024-11-20 15:39:43.178954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:57.292 [2024-11-20 15:39:43.178963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.292 [2024-11-20 15:39:43.178972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:57.292 [2024-11-20 15:39:43.178981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:57.292 [2024-11-20 15:39:43.178990] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.292 [2024-11-20 15:39:43.179000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:57.292 [2024-11-20 15:39:43.179009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:57.292 [2024-11-20 15:39:43.179018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.292 [2024-11-20 15:39:43.179027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:57.292 [2024-11-20 15:39:43.179037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:57.292 [2024-11-20 15:39:43.179045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.292 [2024-11-20 15:39:43.179055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:57.292 [2024-11-20 15:39:43.179064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:57.292 [2024-11-20 15:39:43.179074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.292 [2024-11-20 15:39:43.179083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:57.292 [2024-11-20 15:39:43.179092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:57.292 [2024-11-20 15:39:43.179101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.292 [2024-11-20 15:39:43.179111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:57.292 [2024-11-20 15:39:43.179119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:57.292 [2024-11-20 15:39:43.179129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:57.292 [2024-11-20 15:39:43.179137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:57.292 [2024-11-20 15:39:43.179147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:57.292 [2024-11-20 15:39:43.179156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:57.292 [2024-11-20 15:39:43.179165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:57.292 [2024-11-20 15:39:43.179174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:57.292 [2024-11-20 15:39:43.179183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.292 [2024-11-20 15:39:43.179192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:57.292 [2024-11-20 15:39:43.179201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:57.292 [2024-11-20 15:39:43.179211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.292 [2024-11-20 15:39:43.179219] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:57.292 [2024-11-20 15:39:43.179230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:57.292 [2024-11-20 15:39:43.179243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:57.292 [2024-11-20 15:39:43.179257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.292 [2024-11-20 15:39:43.179267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:57.292 [2024-11-20 15:39:43.179276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:57.292 [2024-11-20 15:39:43.179286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:57.292 
[2024-11-20 15:39:43.179295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:57.292 [2024-11-20 15:39:43.179305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:57.292 [2024-11-20 15:39:43.179314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:57.292 [2024-11-20 15:39:43.179326] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:57.292 [2024-11-20 15:39:43.179338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.292 [2024-11-20 15:39:43.179350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:57.292 [2024-11-20 15:39:43.179361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:57.292 [2024-11-20 15:39:43.179371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:57.292 [2024-11-20 15:39:43.179381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:57.292 [2024-11-20 15:39:43.179392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:57.292 [2024-11-20 15:39:43.179402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:57.292 [2024-11-20 15:39:43.179413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:57.292 [2024-11-20 15:39:43.179424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:57.292 [2024-11-20 15:39:43.179434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:57.292 [2024-11-20 15:39:43.179444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:57.292 [2024-11-20 15:39:43.179454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:57.292 [2024-11-20 15:39:43.179465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:57.292 [2024-11-20 15:39:43.179475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:57.292 [2024-11-20 15:39:43.179485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:57.292 [2024-11-20 15:39:43.179495] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:57.292 [2024-11-20 15:39:43.179506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.292 [2024-11-20 15:39:43.179517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:57.292 [2024-11-20 15:39:43.179528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:57.292 [2024-11-20 15:39:43.179539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:57.292 [2024-11-20 15:39:43.179549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:57.292 [2024-11-20 15:39:43.179560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.292 [2024-11-20 15:39:43.179587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:57.292 [2024-11-20 15:39:43.179603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:27:57.292 [2024-11-20 15:39:43.179612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.292 [2024-11-20 15:39:43.219821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.292 [2024-11-20 15:39:43.219861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:57.292 [2024-11-20 15:39:43.219875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.151 ms 00:27:57.292 [2024-11-20 15:39:43.219902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.292 [2024-11-20 15:39:43.220034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.292 [2024-11-20 15:39:43.220052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:57.292 [2024-11-20 15:39:43.220063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:57.292 [2024-11-20 15:39:43.220074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.285288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.553 [2024-11-20 15:39:43.285331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:57.553 [2024-11-20 15:39:43.285346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.189 ms 00:27:57.553 [2024-11-20 15:39:43.285377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.285488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.553 [2024-11-20 15:39:43.285502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:57.553 [2024-11-20 15:39:43.285513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:57.553 [2024-11-20 15:39:43.285523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.285973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.553 [2024-11-20 15:39:43.285994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:57.553 [2024-11-20 15:39:43.286005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:27:57.553 [2024-11-20 15:39:43.286022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.286140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.553 [2024-11-20 15:39:43.286167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:57.553 [2024-11-20 15:39:43.286178] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:27:57.553 [2024-11-20 15:39:43.286189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.305882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.553 [2024-11-20 15:39:43.305921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:57.553 [2024-11-20 15:39:43.305936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.669 ms 00:27:57.553 [2024-11-20 15:39:43.305963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.325433] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:27:57.553 [2024-11-20 15:39:43.325474] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:57.553 [2024-11-20 15:39:43.325489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.553 [2024-11-20 15:39:43.325500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:57.553 [2024-11-20 15:39:43.325528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.384 ms 00:27:57.553 [2024-11-20 15:39:43.325539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.356206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.553 [2024-11-20 15:39:43.356263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:57.553 [2024-11-20 15:39:43.356278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.568 ms 00:27:57.553 [2024-11-20 15:39:43.356289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.375196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.553 [2024-11-20 15:39:43.375239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:57.553 [2024-11-20 15:39:43.375253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.816 ms 00:27:57.553 [2024-11-20 15:39:43.375264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.393687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.553 [2024-11-20 15:39:43.393726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:57.553 [2024-11-20 15:39:43.393739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.338 ms 00:27:57.553 [2024-11-20 15:39:43.393749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.394748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.553 [2024-11-20 15:39:43.394781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:57.553 [2024-11-20 15:39:43.394794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:27:57.553 [2024-11-20 15:39:43.394804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.481394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.553 [2024-11-20 15:39:43.481451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:57.553 [2024-11-20 15:39:43.481484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.557 ms 00:27:57.553 [2024-11-20 15:39:43.481495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.553 [2024-11-20 15:39:43.493730] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:57.813 [2024-11-20 15:39:43.511645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.813 [2024-11-20 15:39:43.511712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:57.813 [2024-11-20 15:39:43.511729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.014 ms 00:27:57.813 [2024-11-20 15:39:43.511747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.813 [2024-11-20 15:39:43.511909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.813 [2024-11-20 15:39:43.511922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:57.813 [2024-11-20 15:39:43.511934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:57.813 [2024-11-20 15:39:43.511944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.813 [2024-11-20 15:39:43.512004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.813 [2024-11-20 15:39:43.512017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:57.813 [2024-11-20 15:39:43.512027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:57.813 [2024-11-20 15:39:43.512037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.813 [2024-11-20 15:39:43.512077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.813 [2024-11-20 15:39:43.512090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:57.813 [2024-11-20 15:39:43.512101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:57.813 [2024-11-20 15:39:43.512111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.813 [2024-11-20 15:39:43.512149] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:57.813 [2024-11-20 15:39:43.512167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.813 [2024-11-20 15:39:43.512178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:57.813 [2024-11-20 15:39:43.512189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:57.813 [2024-11-20 15:39:43.512198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.813 [2024-11-20 15:39:43.548364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.814 [2024-11-20 15:39:43.548406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:57.814 [2024-11-20 15:39:43.548437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.143 ms 00:27:57.814 [2024-11-20 15:39:43.548447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.814 [2024-11-20 15:39:43.548566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.814 [2024-11-20 15:39:43.548594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:57.814 [2024-11-20 15:39:43.548606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:57.814 [2024-11-20 15:39:43.548616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
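The layout figures in the startup dump above cross-check if the FTL logical block size is taken to be 4 KiB (an assumption for this sketch; the log never prints it). Superblock metadata region type:0x2 at blk_offs:0x20 with blk_sz:0x5a00 lines up with "Region l2p / offset: 0.12 MiB / blocks: 90.00 MiB", since 0x20 blocks x 4 KiB = 0.125 MiB and 0x5a00 = 23040 blocks x 4 KiB = 90.00 MiB; the same 90 MiB also falls out of the 23592960 L2P entries times the 4-byte L2P address size. A tiny cross-check of the arithmetic (illustrative only):

```python
# Cross-check of the FTL layout numbers dumped above. The 4 KiB logical
# block size is an assumption made for this sketch, not a value from the log.
FTL_BLOCK_SIZE = 4096  # assumed bytes per FTL block
MiB = 1024 * 1024

# "L2P entries: 23592960" and "L2P address size: 4" (startup dump above)
l2p_table_bytes = 23592960 * 4
assert l2p_table_bytes / MiB == 90.0          # matches "Region l2p ... blocks: 90.00 MiB"

# "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00" (SB metadata layout above)
assert 0x5A00 * FTL_BLOCK_SIZE / MiB == 90.0  # region size in blocks -> the same 90 MiB
assert 0x20 * FTL_BLOCK_SIZE / MiB == 0.125   # offset -> the "0.12 MiB" shown (truncated)
```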
00:27:57.814 [2024-11-20 15:39:43.549689] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:57.814 [2024-11-20 15:39:43.553887] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.543 ms, result 0 00:27:57.814 [2024-11-20 15:39:43.554712] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:57.814 [2024-11-20 15:39:43.573050] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:57.814  [2024-11-20T15:39:43.772Z] Copying: 4096/4096 [kB] (average 28 MBps)[2024-11-20 15:39:43.718732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:57.814 [2024-11-20 15:39:43.732894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.814 [2024-11-20 15:39:43.732933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:57.814 [2024-11-20 15:39:43.732946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:57.814 [2024-11-20 15:39:43.732978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.814 [2024-11-20 15:39:43.733000] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:57.814 [2024-11-20 15:39:43.737194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.814 [2024-11-20 15:39:43.737224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:57.814 [2024-11-20 15:39:43.737252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.179 ms 00:27:57.814 [2024-11-20 15:39:43.737261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.814 [2024-11-20 15:39:43.739175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.814 [2024-11-20 15:39:43.739214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:57.814 [2024-11-20 15:39:43.739227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.890 ms 00:27:57.814 [2024-11-20 15:39:43.739237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.814 [2024-11-20 15:39:43.742424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.814 [2024-11-20 15:39:43.742460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:57.814 [2024-11-20 15:39:43.742472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.169 ms 00:27:57.814 [2024-11-20 15:39:43.742498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.814 [2024-11-20 15:39:43.748210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.814 [2024-11-20 15:39:43.748243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:57.814 [2024-11-20 15:39:43.748271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.682 ms 00:27:57.814 [2024-11-20 15:39:43.748280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.074 [2024-11-20 15:39:43.785300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.074 [2024-11-20 15:39:43.785337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:58.074 [2024-11-20 15:39:43.785350] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 36.956 ms 00:27:58.074 [2024-11-20 15:39:43.785375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.074 [2024-11-20 15:39:43.806711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.074 [2024-11-20 15:39:43.806754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:58.074 [2024-11-20 15:39:43.806787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.280 ms 00:27:58.074 [2024-11-20 15:39:43.806798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.074 [2024-11-20 15:39:43.806929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.074 [2024-11-20 15:39:43.806942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:58.074 [2024-11-20 15:39:43.806953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:58.074 [2024-11-20 15:39:43.806963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.074 [2024-11-20 15:39:43.842846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.074 [2024-11-20 15:39:43.842884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:58.074 [2024-11-20 15:39:43.842896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.853 ms 00:27:58.074 [2024-11-20 15:39:43.842921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.074 [2024-11-20 15:39:43.879365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.074 [2024-11-20 15:39:43.879405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:58.074 [2024-11-20 15:39:43.879418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.389 ms 00:27:58.074 [2024-11-20 15:39:43.879427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.074 [2024-11-20 15:39:43.916174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.074 [2024-11-20 15:39:43.916212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:58.074 [2024-11-20 15:39:43.916226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.690 ms 00:27:58.074 [2024-11-20 15:39:43.916235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.074 [2024-11-20 15:39:43.952256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.074 [2024-11-20 15:39:43.952310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:58.074 [2024-11-20 15:39:43.952323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.922 ms 00:27:58.074 [2024-11-20 15:39:43.952333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.074 [2024-11-20 15:39:43.952388] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:58.074 [2024-11-20 15:39:43.952406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:58.074 [2024-11-20 15:39:43.952419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:58.074 [2024-11-20 15:39:43.952430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:58.074 [2024-11-20 15:39:43.952441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:27:58.074 [2024-11-20 15:39:43.952451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 5-100: 0 / 261120 wr_cnt: 0 state: free (96 identical per-band entries collapsed)
00:27:58.075 [2024-11-20 15:39:43.953515] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:58.075 [2024-11-20 15:39:43.953525] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3adbce50-96c5-4eda-b128-33d3af6d2f46
00:27:58.075 [2024-11-20 15:39:43.953537] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:27:58.075 [2024-11-20 15:39:43.953547] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
[2024-11-20 15:39:43.953634] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Dump statistics, duration: 1.246 ms, status: 0
[2024-11-20 15:39:43.973062] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Deinitialize L2P, duration: 19.372 ms, status: 0
[2024-11-20 15:39:43.973753] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Deinitialize P2L checkpointing, duration: 0.575 ms, status: 0
[2024-11-20 15:39:44.029331] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Initialize reloc, duration: 0.000 ms, status: 0
[2024-11-20 15:39:44.029489] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Initialize bands metadata, duration: 0.000 ms, status: 0
[2024-11-20 15:39:44.029574] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Initialize trim map, duration: 0.000 ms, status: 0
[2024-11-20 15:39:44.029640] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Initialize valid map, duration: 0.000 ms, status: 0
[2024-11-20 15:39:44.156532] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Initialize NV cache, duration: 0.000 ms, status: 0
[2024-11-20 15:39:44.260522] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Initialize metadata, duration: 0.000 ms, status: 0
[2024-11-20 15:39:44.260736] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Initialize core IO channel, duration: 0.000 ms, status: 0
[2024-11-20 15:39:44.260798] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Initialize bands, duration: 0.000 ms, status: 0
[2024-11-20 15:39:44.260951] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Initialize memory pools, duration: 0.000 ms, status: 0
[2024-11-20 15:39:44.261023] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Initialize superblock, duration: 0.000 ms, status: 0
[2024-11-20 15:39:44.261099] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Open cache bdev, duration: 0.000 ms, status: 0
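The Rollback entries above are the startup pipeline unwinding: the names match the Action steps of FTL startup (traced further down) and run in reverse registration order, ending with the cache and base bdevs that were opened first. A small illustrative sketch of that do/undo pattern, not SPDK's actual implementation:

    # Each management step registers a do/undo pair; teardown unwinds in
    # reverse registration order, which is the order seen in the trace above.
    class MgmtPipeline:
        def __init__(self):
            self.completed = []  # (name, rollback) for every finished action

        def run(self, name, action, rollback):
            action()
            print(f"Action  name: {name} status: 0")
            self.completed.append((name, rollback))

        def shutdown(self):
            for name, rollback in reversed(self.completed):
                rollback()
                print(f"Rollback  name: {name} status: 0")

    p = MgmtPipeline()
    p.run("Open base bdev", lambda: None, lambda: None)
    p.run("Open cache bdev", lambda: None, lambda: None)
    p.shutdown()  # rolls back "Open cache bdev" first, then "Open base bdev"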
[2024-11-20 15:39:44.261175] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: name: Open base bdev, duration: 0.000 ms, status: 0
[2024-11-20 15:39:44.261346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 528.438 ms, result 0
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
15:39:45 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78929
15:39:45 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
15:39:45 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78929
15:39:45 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78929 ']'
15:39:45 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
15:39:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
15:39:45 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
15:39:45 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
15:39:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 15:39:45.484287] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
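waitforlisten, traced above, polls until the freshly launched spdk_tgt accepts connections on /var/tmp/spdk.sock, where SPDK serves JSON-RPC 2.0. A hedged client-side sketch of such a liveness probe; rpc_get_methods is a standard SPDK RPC, and the single recv() is a deliberate simplification of proper response framing:

    import json, socket

    def spdk_listening(path="/var/tmp/spdk.sock") -> bool:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.settimeout(2.0)
                s.connect(path)
                req = {"jsonrpc": "2.0", "id": 1, "method": "rpc_get_methods"}
                s.sendall(json.dumps(req).encode())
                return bool(s.recv(4096))  # any reply means the target is up
        except OSError:
            return False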
[2024-11-20 15:39:45.484456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78929 ]
[2024-11-20 15:39:45.677666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 15:39:45.793149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
15:39:46 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
15:39:46 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
15:39:46 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
[2024-11-20 15:39:46.919841] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 15:39:46.919913] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 15:39:47.105725] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Check configuration, duration: 0.005 ms, status: 0
[2024-11-20 15:39:47.109396] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Open base bdev, duration: 3.551 ms, status: 0
[2024-11-20 15:39:47.109597] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-20 15:39:47.110676] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-20 15:39:47.110724] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Open cache bdev, duration: 1.139 ms, status: 0
[2024-11-20 15:39:47.112300] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-20 15:39:47.131070] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Load super block, duration: 18.774 ms, status: 0
[2024-11-20 15:39:47.131287] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Validate super block, duration: 0.025 ms, status: 0
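The two "Currently unable to find bdev with name: nvc0n1" notices above are transient: load_config replays the saved configuration before the NVMe attach has registered nvc0n1, and the open evidently succeeds shortly after (nvc0n1p0 is in use as the write buffer cache a few lines down). SPDK resolves this through deferred, event-driven configuration rather than a loop, but the effect resembles this polling sketch, where find_bdev is a hypothetical stand-in:

    import time

    def open_when_ready(name, find_bdev, timeout=5.0, poll=0.05):
        # find_bdev is a stand-in lookup: returns the bdev, or None while
        # the device is still "unable to find". Illustrative only.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            bdev = find_bdev(name)
            if bdev is not None:
                return bdev
            time.sleep(poll)
        raise TimeoutError(f"bdev {name!r} never appeared")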
[2024-11-20 15:39:47.138099] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize memory pools, duration: 6.718 ms, status: 0
[2024-11-20 15:39:47.138309] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize bands, duration: 0.096 ms, status: 0
[2024-11-20 15:39:47.138391] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Register IO device, duration: 0.010 ms, status: 0
[2024-11-20 15:39:47.138459] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-11-20 15:39:47.143289] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize core IO channel, duration: 4.832 ms, status: 0
[2024-11-20 15:39:47.143434] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Decorate bands, duration: 0.010 ms, status: 0
[2024-11-20 15:39:47.143506] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-20 15:39:47.143530] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-20 15:39:47.143595] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-20 15:39:47.143616] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-20 15:39:47.143713] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-20 15:39:47.143727] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-20 15:39:47.143753] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-20 15:39:47.143767] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-20 15:39:47.143784] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-20 15:39:47.143796] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
[2024-11-20 15:39:47.143811] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-20 15:39:47.143821] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-20 15:39:47.143840] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-20 15:39:47.143851] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize layout, duration: 0.352 ms, status: 0
[2024-11-20 15:39:47.143974] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Verify layout, duration: 0.056 ms, status: 0
[2024-11-20 15:39:47.144107] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region sb: offset 0.00 MiB, blocks 0.12 MiB
    Region l2p: offset 0.12 MiB, blocks 90.00 MiB
    Region band_md: offset 90.12 MiB, blocks 0.50 MiB
    Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
    Region nvc_md: offset 123.88 MiB, blocks 0.12 MiB
    Region nvc_md_mirror: offset 124.00 MiB, blocks 0.12 MiB
    Region p2l0: offset 91.12 MiB, blocks 8.00 MiB
    Region p2l1: offset 99.12 MiB, blocks 8.00 MiB
    Region p2l2: offset 107.12 MiB, blocks 8.00 MiB
    Region p2l3: offset 115.12 MiB, blocks 8.00 MiB
    Region trim_md: offset 123.12 MiB, blocks 0.25 MiB
    Region trim_md_mirror: offset 123.38 MiB, blocks 0.25 MiB
    Region trim_log: offset 123.62 MiB, blocks 0.12 MiB
    Region trim_log_mirror: offset 123.75 MiB, blocks 0.12 MiB
[2024-11-20 15:39:47.144682] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
    Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
    Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
[2024-11-20 15:39:47.144812] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
    Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
[2024-11-20 15:39:47.145019] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-20 15:39:47.145095] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Layout upgrade, duration: 1.040 ms, status: 0
[2024-11-20 15:39:47.183455] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize metadata, duration: 38.256 ms, status: 0
[2024-11-20 15:39:47.183702] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize band addresses, duration: 0.054 ms, status: 0
[2024-11-20 15:39:47.226209] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize NV cache, duration: 42.432 ms, status: 0
[2024-11-20 15:39:47.226390] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize valid map, duration: 0.003 ms, status: 0
[2024-11-20 15:39:47.226893] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize trim map, duration: 0.444 ms, status: 0
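The layout dumps above are internally consistent if one assumes the FTL's 4 KiB block size: each blk_sz in blocks converts exactly to the MiB figure printed for the corresponding region, and the 23592960 L2P entries at the reported 4-byte address size exactly fill the 90 MiB l2p region. A quick cross-check that only restates numbers already in the dump:

    BLK = 4096  # assumed FTL block size; every figure below checks out with it
    mib = lambda blocks: blocks * BLK / 2**20

    print(mib(0x5a00))           # 90.0     -> "Region l2p ... blocks: 90.00 MiB"
    print(mib(0x800))            # 8.0      -> each p2l0..p2l3 region, 8.00 MiB
    print(mib(0x1900000))        # 102400.0 -> "Region data_btm ... 102400.00 MiB"
    print(23592960 * 4 / 2**20)  # 90.0     -> L2P entries x 4-byte addresses
                                 #             exactly fill the l2p region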
[2024-11-20 15:39:47.227225] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize bands metadata, duration: 0.094 ms, status: 0
[2024-11-20 15:39:47.247598] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize reloc, duration: 20.309 ms, status: 0
[2024-11-20 15:39:47.281852] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-11-20 15:39:47.281891] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-20 15:39:47.281910] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Restore NV cache metadata, duration: 34.107 ms, status: 0
[2024-11-20 15:39:47.310516] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Restore valid map metadata, duration: 28.487 ms, status: 0
[2024-11-20 15:39:47.328473] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Restore band info metadata, duration: 17.755 ms, status: 0
[2024-11-20 15:39:47.346342] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Restore trim metadata, duration: 17.708 ms, status: 0
[2024-11-20 15:39:47.347178] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize P2L checkpointing, duration: 0.677 ms, status: 0
[2024-11-20 15:39:47.431851] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Restore P2L checkpoints, duration: 84.579 ms, status: 0
[2024-11-20 15:39:47.442886] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-20 15:39:47.459085] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Initialize L2P, duration: 27.021 ms, status: 0
[2024-11-20 15:39:47.459305] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Restore L2P, duration: 0.007 ms, status: 0
[2024-11-20 15:39:47.459399] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Finalize band initialization, duration: 0.033 ms, status: 0
[2024-11-20 15:39:47.459464] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Start core poller, duration: 0.005 ms, status: 0
[2024-11-20 15:39:47.459538] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-20 15:39:47.459556] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Self test on startup, duration: 0.013 ms, status: 0
[2024-11-20 15:39:47.495610] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Set FTL dirty state, duration: 35.942 ms, status: 0
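Adding up the per-step durations of this startup trace gives roughly 384.6 ms, close to the 390.768 ms total reported for 'FTL startup' just below; the difference is time spent between steps. The bookkeeping is easy to reproduce from the trace text, shown here on three sampled steps:

    import re

    # Three sample steps from the trace above; the same regex works on all.
    trace = """
    name: Load super block duration: 18.774 ms
    name: Initialize NV cache duration: 42.432 ms
    name: Restore P2L checkpoints duration: 84.579 ms
    """
    total = sum(float(d) for d in re.findall(r"duration: ([0-9.]+) ms", trace))
    print(f"{total:.3f} ms")  # 145.785 ms for just these three steps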
[2024-11-20 15:39:47.495788] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Finalize initialization, duration: 0.035 ms, status: 0
[2024-11-20 15:39:47.496837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-20 15:39:47.501233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.768 ms, result 0
[2024-11-20 15:39:47.502484] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
Some configs were skipped because the RPC state that can call them passed over.
15:39:47 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
[2024-11-20 15:39:47.793827] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Process trim, duration: 1.365 ms, status: 0
[2024-11-20 15:39:47.793980] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.525 ms, result 0
true
15:39:47 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
[2024-11-20 15:39:47.973891] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Process trim, duration: 1.218 ms, status: 0
[2024-11-20 15:39:47.974037] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.371 ms, result 0
true
15:39:47 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78929
15:39:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78929 ']'
15:39:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78929
15:39:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
15:39:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
15:39:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78929
killing process with pid 78929
15:39:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
15:39:48 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
15:39:48 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78929'
15:39:48 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78929
15:39:48 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78929
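The two bdev_ftl_unmap calls above bracket the device's logical space: the first trims 1024 blocks at LBA 0, and the second starts at 23591936, which is the 23592960 L2P entry count reported during startup minus the same 1024 blocks, i.e. the last 1024 LBAs. The arithmetic, for the record:

    l2p_entries = 23592960  # reported during FTL startup above
    num_blocks = 1024       # --num_blocks in both rpc.py calls
    print(l2p_entries - num_blocks)  # 23591936 -> --lba of the second call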
[2024-11-20 15:39:49.136044] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Deinit core IO channel, duration: 0.004 ms, status: 0
[2024-11-20 15:39:49.136167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-20 15:39:49.140651] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Unregister IO device, duration: 4.464 ms, status: 0
[2024-11-20 15:39:49.140962] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Stop core poller, duration: 0.203 ms, status: 0
[2024-11-20 15:39:49.144420] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Persist L2P, duration: 3.399 ms, status: 0
[2024-11-20 15:39:49.150069] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Finish L2P trims, duration: 5.539 ms, status: 0
[2024-11-20 15:39:49.164560] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Persist NV cache metadata, duration: 14.362 ms, status: 0
[2024-11-20 15:39:49.174920] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Persist valid map metadata, duration: 10.198 ms, status: 0
[2024-11-20 15:39:49.175131] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Persist P2L metadata, duration: 0.073 ms, status: 0
[2024-11-20 15:39:49.190345] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Persist band info metadata, duration: 15.153 ms, status: 0
[2024-11-20 15:39:49.205263] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Persist trim metadata, duration: 14.798 ms, status: 0
[2024-11-20 15:39:49.220315] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Persist superblock, duration: 14.913 ms, status: 0
[2024-11-20 15:39:49.235441] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: name: Set FTL clean state, duration: 14.790 ms, status: 0
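Every line of the band dump that follows repeats the same fields, 'Band <id>: <valid> / <size> wr_cnt: <n> state: <state>', so a short regex is enough to turn the flood into checkable records; the sample string below is one line lifted from the dump:

    import re

    BAND = re.compile(r"Band\s+(\d+):\s+(\d+)\s*/\s*(\d+)"
                      r"\s+wr_cnt:\s+(\d+)\s+state:\s+(\w+)")

    line = "[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free"
    m = BAND.search(line)
    band = {"id": int(m[1]), "valid": int(m[2]), "size": int(m[3]),
            "wr_cnt": int(m[4]), "state": m[5]}
    print(band)  # {'id': 1, 'valid': 0, 'size': 261120, 'wr_cnt': 0,
                 #  'state': 'free'}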
[2024-11-20 15:39:49.235700] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-20 15:39:49.235718 - 15:39:49.237006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 100: 0 / 261120 wr_cnt: 0 state: free (100 identical per-band lines condensed)
[2024-11-20 15:39:49.237025] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-20 15:39:49.237050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3adbce50-96c5-4eda-b128-33d3af6d2f46
[2024-11-20 15:39:49.237073] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-20 15:39:49.237095] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-20 15:39:49.237105] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-20 15:39:49.237120] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-20 15:39:49.237129] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-20 15:39:49.237144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-11-20 15:39:49.237154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
[2024-11-20 15:39:49.237169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
[2024-11-20 15:39:49.237178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
[2024-11-20 15:39:49.237190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:03.466 [2024-11-20 15:39:49.237201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:03.466 [2024-11-20 15:39:49.237214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:28:03.466 [2024-11-20 15:39:49.237224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.466 [2024-11-20 15:39:49.258213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.466 [2024-11-20 15:39:49.258248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:03.466 [2024-11-20 15:39:49.258274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.958 ms 00:28:03.466 [2024-11-20 15:39:49.258285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.466 [2024-11-20 15:39:49.258845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.466 [2024-11-20 15:39:49.258865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:03.466 [2024-11-20 15:39:49.258882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:28:03.466 [2024-11-20 15:39:49.258898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.466 [2024-11-20 15:39:49.329104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.466 [2024-11-20 15:39:49.329143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:03.466 [2024-11-20 15:39:49.329162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.466 [2024-11-20 15:39:49.329173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.466 [2024-11-20 15:39:49.329273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.466 [2024-11-20 15:39:49.329286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:03.466 [2024-11-20 15:39:49.329301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.466 [2024-11-20 15:39:49.329316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.466 [2024-11-20 15:39:49.329372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.466 [2024-11-20 15:39:49.329385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:03.466 [2024-11-20 15:39:49.329405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.466 [2024-11-20 15:39:49.329415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.466 [2024-11-20 15:39:49.329440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.466 [2024-11-20 15:39:49.329451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:03.466 [2024-11-20 15:39:49.329466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.466 [2024-11-20 15:39:49.329476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.725 [2024-11-20 15:39:49.454888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.725 [2024-11-20 15:39:49.455081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:03.725 [2024-11-20 15:39:49.455116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.725 [2024-11-20 15:39:49.455128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.725 [2024-11-20 
15:39:49.553909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.725 [2024-11-20 15:39:49.553963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:03.725 [2024-11-20 15:39:49.553980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.725 [2024-11-20 15:39:49.553993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.725 [2024-11-20 15:39:49.554085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.725 [2024-11-20 15:39:49.554097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:03.725 [2024-11-20 15:39:49.554112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.725 [2024-11-20 15:39:49.554122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.725 [2024-11-20 15:39:49.554151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.725 [2024-11-20 15:39:49.554162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:03.725 [2024-11-20 15:39:49.554174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.725 [2024-11-20 15:39:49.554183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.725 [2024-11-20 15:39:49.554299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.725 [2024-11-20 15:39:49.554311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:03.725 [2024-11-20 15:39:49.554323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.725 [2024-11-20 15:39:49.554333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.725 [2024-11-20 15:39:49.554372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.725 [2024-11-20 15:39:49.554383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:03.725 [2024-11-20 15:39:49.554395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.725 [2024-11-20 15:39:49.554405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.725 [2024-11-20 15:39:49.554447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.725 [2024-11-20 15:39:49.554458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:03.725 [2024-11-20 15:39:49.554472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.725 [2024-11-20 15:39:49.554482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.725 [2024-11-20 15:39:49.554527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.725 [2024-11-20 15:39:49.554538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:03.725 [2024-11-20 15:39:49.554550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.725 [2024-11-20 15:39:49.554591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.725 [2024-11-20 15:39:49.554764] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 418.676 ms, result 0 00:28:04.678 15:39:50 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:04.935 [2024-11-20 15:39:50.706372] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:04.936 [2024-11-20 15:39:50.706547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78997 ] 00:28:05.194 [2024-11-20 15:39:50.896592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.194 [2024-11-20 15:39:51.001796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.452 [2024-11-20 15:39:51.352809] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:05.452 [2024-11-20 15:39:51.352880] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:05.711 [2024-11-20 15:39:51.515123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.515378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:05.711 [2024-11-20 15:39:51.515417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:05.711 [2024-11-20 15:39:51.515429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.518820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.518861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:05.711 [2024-11-20 15:39:51.518875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.362 ms 00:28:05.711 [2024-11-20 15:39:51.518894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.519007] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:05.711 [2024-11-20 15:39:51.519982] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:05.711 [2024-11-20 15:39:51.520017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.520029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:05.711 [2024-11-20 15:39:51.520041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:28:05.711 [2024-11-20 15:39:51.520051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.521546] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:05.711 [2024-11-20 15:39:51.540718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.540901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:05.711 [2024-11-20 15:39:51.540926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.173 ms 00:28:05.711 [2024-11-20 15:39:51.540937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.541039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.541054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:05.711 [2024-11-20 15:39:51.541066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:05.711 [2024-11-20 
15:39:51.541076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.547881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.548056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:05.711 [2024-11-20 15:39:51.548076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.762 ms 00:28:05.711 [2024-11-20 15:39:51.548087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.548198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.548212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:05.711 [2024-11-20 15:39:51.548224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:05.711 [2024-11-20 15:39:51.548234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.548267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.548283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:05.711 [2024-11-20 15:39:51.548294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:05.711 [2024-11-20 15:39:51.548304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.548329] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:05.711 [2024-11-20 15:39:51.553200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.553233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:05.711 [2024-11-20 15:39:51.553245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.878 ms 00:28:05.711 [2024-11-20 15:39:51.553255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.553324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.553336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:05.711 [2024-11-20 15:39:51.553348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:05.711 [2024-11-20 15:39:51.553357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.553377] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:05.711 [2024-11-20 15:39:51.553414] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:05.711 [2024-11-20 15:39:51.553450] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:05.711 [2024-11-20 15:39:51.553468] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:05.711 [2024-11-20 15:39:51.553557] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:05.711 [2024-11-20 15:39:51.553589] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:05.711 [2024-11-20 15:39:51.553619] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
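(Editor's aside on the step being traced here: the startup records above come from the spdk_dd step launched at 00:28:04 — trim.sh line 105 runs spdk_dd to read 65536 blocks out of the FTL bdev ftl0 into a flat file, recreating the bdev stack from ftl.json, which is why a full 'FTL startup' sequence appears inside this process. The invocation below is reproduced from the log as recorded; the only inference is the 4 KiB logical block size, which follows from the copy progress further down totalling 256 MB (65536 x 4 KiB = 256 MiB).)

  # spdk_dd invocation from trim.sh@105 above; all paths are as logged in this run.
  #   --ib=ftl0      input bdev: the FTL device under test
  #   --of=...       output: a plain file in the test directory
  #   --count=65536  blocks to copy; consistent with a 4 KiB block, matching
  #                  the "Copying: 256/256 [MB]" progress reported below
  #   --json=...     JSON config used to bring up the bdev stack inside spdk_dd
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

(When the copy finishes, the matching 'FTL shutdown' management sequence and a second band/statistics dump follow, mirroring the shutdown trace earlier in this section.)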
00:28:05.711 [2024-11-20 15:39:51.553632] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:05.711 [2024-11-20 15:39:51.553649] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:05.711 [2024-11-20 15:39:51.553660] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:05.711 [2024-11-20 15:39:51.553671] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:05.711 [2024-11-20 15:39:51.553680] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:05.711 [2024-11-20 15:39:51.553690] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:05.711 [2024-11-20 15:39:51.553700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.553711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:05.711 [2024-11-20 15:39:51.553721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:28:05.711 [2024-11-20 15:39:51.553731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.553809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.711 [2024-11-20 15:39:51.553825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:05.711 [2024-11-20 15:39:51.553835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:05.711 [2024-11-20 15:39:51.553845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.711 [2024-11-20 15:39:51.553938] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:05.711 [2024-11-20 15:39:51.553952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:05.711 [2024-11-20 15:39:51.553962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:05.711 [2024-11-20 15:39:51.553973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.711 [2024-11-20 15:39:51.553983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:05.711 [2024-11-20 15:39:51.553993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:05.711 [2024-11-20 15:39:51.554002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:05.711 [2024-11-20 15:39:51.554012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:05.711 [2024-11-20 15:39:51.554021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:05.711 [2024-11-20 15:39:51.554031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:05.711 [2024-11-20 15:39:51.554040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:05.711 [2024-11-20 15:39:51.554049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:05.711 [2024-11-20 15:39:51.554058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:05.711 [2024-11-20 15:39:51.554079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:05.711 [2024-11-20 15:39:51.554088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:05.711 [2024-11-20 15:39:51.554099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.711 [2024-11-20 15:39:51.554109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:28:05.711 [2024-11-20 15:39:51.554119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:05.711 [2024-11-20 15:39:51.554128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.711 [2024-11-20 15:39:51.554138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:05.711 [2024-11-20 15:39:51.554147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:05.711 [2024-11-20 15:39:51.554157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.711 [2024-11-20 15:39:51.554167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:05.711 [2024-11-20 15:39:51.554176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:05.711 [2024-11-20 15:39:51.554185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.711 [2024-11-20 15:39:51.554194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:05.711 [2024-11-20 15:39:51.554204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:05.711 [2024-11-20 15:39:51.554213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.711 [2024-11-20 15:39:51.554222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:05.711 [2024-11-20 15:39:51.554231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:05.711 [2024-11-20 15:39:51.554241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.711 [2024-11-20 15:39:51.554250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:05.711 [2024-11-20 15:39:51.554259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:05.711 [2024-11-20 15:39:51.554269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:05.711 [2024-11-20 15:39:51.554278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:05.711 [2024-11-20 15:39:51.554287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:05.712 [2024-11-20 15:39:51.554296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:05.712 [2024-11-20 15:39:51.554305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:05.712 [2024-11-20 15:39:51.554315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:05.712 [2024-11-20 15:39:51.554324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.712 [2024-11-20 15:39:51.554333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:05.712 [2024-11-20 15:39:51.554342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:05.712 [2024-11-20 15:39:51.554351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.712 [2024-11-20 15:39:51.554360] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:05.712 [2024-11-20 15:39:51.554370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:05.712 [2024-11-20 15:39:51.554380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:05.712 [2024-11-20 15:39:51.554394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.712 [2024-11-20 15:39:51.554404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:05.712 [2024-11-20 15:39:51.554414] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:05.712 [2024-11-20 15:39:51.554423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:05.712 [2024-11-20 15:39:51.554433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:05.712 [2024-11-20 15:39:51.554442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:05.712 [2024-11-20 15:39:51.554451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:05.712 [2024-11-20 15:39:51.554462] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:05.712 [2024-11-20 15:39:51.554474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.712 [2024-11-20 15:39:51.554486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:05.712 [2024-11-20 15:39:51.554496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:05.712 [2024-11-20 15:39:51.554506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:05.712 [2024-11-20 15:39:51.554516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:05.712 [2024-11-20 15:39:51.554526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:05.712 [2024-11-20 15:39:51.554537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:05.712 [2024-11-20 15:39:51.554547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:05.712 [2024-11-20 15:39:51.554566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:05.712 [2024-11-20 15:39:51.554589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:05.712 [2024-11-20 15:39:51.554599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:05.712 [2024-11-20 15:39:51.554610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:05.712 [2024-11-20 15:39:51.554621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:05.712 [2024-11-20 15:39:51.554632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:05.712 [2024-11-20 15:39:51.554643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:05.712 [2024-11-20 15:39:51.554653] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:05.712 [2024-11-20 15:39:51.554665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.712 [2024-11-20 15:39:51.554677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:05.712 [2024-11-20 15:39:51.554687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:05.712 [2024-11-20 15:39:51.554698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:05.712 [2024-11-20 15:39:51.554708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:05.712 [2024-11-20 15:39:51.554720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.712 [2024-11-20 15:39:51.554730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:05.712 [2024-11-20 15:39:51.554744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:28:05.712 [2024-11-20 15:39:51.554754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.712 [2024-11-20 15:39:51.593046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.712 [2024-11-20 15:39:51.593086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:05.712 [2024-11-20 15:39:51.593100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.235 ms 00:28:05.712 [2024-11-20 15:39:51.593127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.712 [2024-11-20 15:39:51.593261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.712 [2024-11-20 15:39:51.593278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:05.712 [2024-11-20 15:39:51.593290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:05.712 [2024-11-20 15:39:51.593300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.712 [2024-11-20 15:39:51.650538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.712 [2024-11-20 15:39:51.650597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:05.712 [2024-11-20 15:39:51.650613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.213 ms 00:28:05.712 [2024-11-20 15:39:51.650628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.712 [2024-11-20 15:39:51.650736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.712 [2024-11-20 15:39:51.650750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:05.712 [2024-11-20 15:39:51.650761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:05.712 [2024-11-20 15:39:51.650771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.712 [2024-11-20 15:39:51.651208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.712 [2024-11-20 15:39:51.651229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:05.712 [2024-11-20 15:39:51.651241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:28:05.712 [2024-11-20 15:39:51.651258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.712 [2024-11-20 15:39:51.651378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:05.712 [2024-11-20 15:39:51.651392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:05.712 [2024-11-20 15:39:51.651402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:28:05.712 [2024-11-20 15:39:51.651413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.671345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.671383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:05.969 [2024-11-20 15:39:51.671397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.910 ms 00:28:05.969 [2024-11-20 15:39:51.671408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.690960] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:05.969 [2024-11-20 15:39:51.691001] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:05.969 [2024-11-20 15:39:51.691018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.691029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:05.969 [2024-11-20 15:39:51.691041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.487 ms 00:28:05.969 [2024-11-20 15:39:51.691052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.721579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.721634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:05.969 [2024-11-20 15:39:51.721648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.435 ms 00:28:05.969 [2024-11-20 15:39:51.721659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.739985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.740024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:05.969 [2024-11-20 15:39:51.740038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.239 ms 00:28:05.969 [2024-11-20 15:39:51.740048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.758836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.758999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:05.969 [2024-11-20 15:39:51.759021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.708 ms 00:28:05.969 [2024-11-20 15:39:51.759032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.760006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.760039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:05.969 [2024-11-20 15:39:51.760052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:28:05.969 [2024-11-20 15:39:51.760062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.849293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 
15:39:51.849347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:05.969 [2024-11-20 15:39:51.849364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.200 ms 00:28:05.969 [2024-11-20 15:39:51.849391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.860825] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:05.969 [2024-11-20 15:39:51.877703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.877761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:05.969 [2024-11-20 15:39:51.877778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.187 ms 00:28:05.969 [2024-11-20 15:39:51.877796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.877946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.877960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:05.969 [2024-11-20 15:39:51.877972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:05.969 [2024-11-20 15:39:51.877983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.878039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.878051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:05.969 [2024-11-20 15:39:51.878062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:05.969 [2024-11-20 15:39:51.878072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.878112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.878126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:05.969 [2024-11-20 15:39:51.878137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:05.969 [2024-11-20 15:39:51.878147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.878183] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:05.969 [2024-11-20 15:39:51.878196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.878206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:05.969 [2024-11-20 15:39:51.878216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:05.969 [2024-11-20 15:39:51.878226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.915778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.915821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:05.969 [2024-11-20 15:39:51.915836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.528 ms 00:28:05.969 [2024-11-20 15:39:51.915847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.915967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.969 [2024-11-20 15:39:51.915982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:05.969 [2024-11-20 
15:39:51.915994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:05.969 [2024-11-20 15:39:51.916005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.969 [2024-11-20 15:39:51.916944] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:05.969 [2024-11-20 15:39:51.921452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.468 ms, result 0 00:28:05.969 [2024-11-20 15:39:51.922304] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:06.226 [2024-11-20 15:39:51.941445] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:07.160  [2024-11-20T15:39:54.054Z] Copying: 33/256 [MB] (33 MBps) [2024-11-20T15:39:55.431Z] Copying: 63/256 [MB] (30 MBps) [2024-11-20T15:39:56.368Z] Copying: 93/256 [MB] (30 MBps) [2024-11-20T15:39:57.305Z] Copying: 123/256 [MB] (29 MBps) [2024-11-20T15:39:58.240Z] Copying: 153/256 [MB] (30 MBps) [2024-11-20T15:39:59.171Z] Copying: 183/256 [MB] (29 MBps) [2024-11-20T15:40:00.103Z] Copying: 213/256 [MB] (29 MBps) [2024-11-20T15:40:00.670Z] Copying: 243/256 [MB] (29 MBps) [2024-11-20T15:40:00.929Z] Copying: 256/256 [MB] (average 30 MBps)[2024-11-20 15:40:00.863544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:14.971 [2024-11-20 15:40:00.884478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.971 [2024-11-20 15:40:00.884672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:14.971 [2024-11-20 15:40:00.884769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:14.971 [2024-11-20 15:40:00.884795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.971 [2024-11-20 15:40:00.884838] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:14.971 [2024-11-20 15:40:00.889062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.971 [2024-11-20 15:40:00.889093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:14.971 [2024-11-20 15:40:00.889106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.205 ms 00:28:14.971 [2024-11-20 15:40:00.889115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.971 [2024-11-20 15:40:00.889346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.971 [2024-11-20 15:40:00.889358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:14.971 [2024-11-20 15:40:00.889369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:28:14.971 [2024-11-20 15:40:00.889379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.971 [2024-11-20 15:40:00.892267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.971 [2024-11-20 15:40:00.892418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:14.971 [2024-11-20 15:40:00.892441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.873 ms 00:28:14.971 [2024-11-20 15:40:00.892451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.971 [2024-11-20 15:40:00.898007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:28:14.971 [2024-11-20 15:40:00.898037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:14.971 [2024-11-20 15:40:00.898049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.524 ms 00:28:14.971 [2024-11-20 15:40:00.898075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.231 [2024-11-20 15:40:00.934525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.231 [2024-11-20 15:40:00.934579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:15.231 [2024-11-20 15:40:00.934611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.380 ms 00:28:15.231 [2024-11-20 15:40:00.934621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.231 [2024-11-20 15:40:00.955666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.231 [2024-11-20 15:40:00.955824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:15.231 [2024-11-20 15:40:00.955853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.983 ms 00:28:15.231 [2024-11-20 15:40:00.955865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.231 [2024-11-20 15:40:00.956024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.231 [2024-11-20 15:40:00.956040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:15.231 [2024-11-20 15:40:00.956052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:15.231 [2024-11-20 15:40:00.956062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.231 [2024-11-20 15:40:00.993640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.231 [2024-11-20 15:40:00.993679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:15.231 [2024-11-20 15:40:00.993692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.546 ms 00:28:15.231 [2024-11-20 15:40:00.993702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.231 [2024-11-20 15:40:01.028314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.231 [2024-11-20 15:40:01.028349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:15.231 [2024-11-20 15:40:01.028362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.550 ms 00:28:15.231 [2024-11-20 15:40:01.028388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.231 [2024-11-20 15:40:01.062838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.231 [2024-11-20 15:40:01.063011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:15.231 [2024-11-20 15:40:01.063032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.389 ms 00:28:15.231 [2024-11-20 15:40:01.063042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.231 [2024-11-20 15:40:01.098177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.231 [2024-11-20 15:40:01.098214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:15.231 [2024-11-20 15:40:01.098227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.011 ms 00:28:15.231 [2024-11-20 15:40:01.098236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.231 [2024-11-20 
15:40:01.098292] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:15.231 [2024-11-20 15:40:01.098308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:15.231 [2024-11-20 15:40:01.098320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:15.231 [2024-11-20 15:40:01.098330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:15.231 [2024-11-20 15:40:01.098341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:15.231 [2024-11-20 15:40:01.098351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:15.231 [2024-11-20 15:40:01.098361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:15.231 [2024-11-20 15:40:01.098371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:15.231 [2024-11-20 15:40:01.098381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:15.231 [2024-11-20 15:40:01.098391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 
15:40:01.098549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:28:15.232 [2024-11-20 15:40:01.098850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.098995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:15.232 [2024-11-20 15:40:01.099359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:15.233 [2024-11-20 15:40:01.099370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:15.233 [2024-11-20 15:40:01.099380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:15.233 [2024-11-20 15:40:01.099398] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:15.233 [2024-11-20 15:40:01.099408] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3adbce50-96c5-4eda-b128-33d3af6d2f46 00:28:15.233 [2024-11-20 15:40:01.099419] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:15.233 [2024-11-20 15:40:01.099429] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:15.233 [2024-11-20 15:40:01.099439] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:15.233 [2024-11-20 15:40:01.099449] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:15.233 [2024-11-20 15:40:01.099459] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:15.233 [2024-11-20 15:40:01.099469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:15.233 [2024-11-20 15:40:01.099478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:15.233 [2024-11-20 15:40:01.099488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:15.233 [2024-11-20 15:40:01.099497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:15.233 [2024-11-20 15:40:01.099507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.233 [2024-11-20 15:40:01.099521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:15.233 [2024-11-20 15:40:01.099532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:28:15.233 [2024-11-20 15:40:01.099542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.233 [2024-11-20 15:40:01.118917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.233 [2024-11-20 15:40:01.118949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:15.233 [2024-11-20 15:40:01.118962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.352 ms 00:28:15.233 [2024-11-20 15:40:01.118971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.233 [2024-11-20 15:40:01.119515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.233 [2024-11-20 15:40:01.119530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:15.233 [2024-11-20 15:40:01.119541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:28:15.233 [2024-11-20 15:40:01.119550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.233 [2024-11-20 15:40:01.174002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.233 [2024-11-20 15:40:01.174037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:15.233 [2024-11-20 15:40:01.174050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.233 [2024-11-20 15:40:01.174060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.233 [2024-11-20 15:40:01.174133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.233 [2024-11-20 15:40:01.174144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:15.233 [2024-11-20 15:40:01.174155] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.233 [2024-11-20 15:40:01.174164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.233 [2024-11-20 15:40:01.174215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.233 [2024-11-20 15:40:01.174226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:15.233 [2024-11-20 15:40:01.174236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.233 [2024-11-20 15:40:01.174246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.233 [2024-11-20 15:40:01.174264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.233 [2024-11-20 15:40:01.174278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:15.233 [2024-11-20 15:40:01.174288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.233 [2024-11-20 15:40:01.174298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.491 [2024-11-20 15:40:01.295773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.491 [2024-11-20 15:40:01.295832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:15.491 [2024-11-20 15:40:01.295847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.491 [2024-11-20 15:40:01.295874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.491 [2024-11-20 15:40:01.393879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.491 [2024-11-20 15:40:01.393935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:15.491 [2024-11-20 15:40:01.393949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.491 [2024-11-20 15:40:01.393960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.491 [2024-11-20 15:40:01.394029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.491 [2024-11-20 15:40:01.394040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:15.491 [2024-11-20 15:40:01.394050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.491 [2024-11-20 15:40:01.394060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.491 [2024-11-20 15:40:01.394086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.491 [2024-11-20 15:40:01.394096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:15.491 [2024-11-20 15:40:01.394112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.491 [2024-11-20 15:40:01.394121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.491 [2024-11-20 15:40:01.394225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.491 [2024-11-20 15:40:01.394238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:15.491 [2024-11-20 15:40:01.394248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.492 [2024-11-20 15:40:01.394257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.492 [2024-11-20 15:40:01.394295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.492 [2024-11-20 15:40:01.394307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:28:15.492 [2024-11-20 15:40:01.394317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.492 [2024-11-20 15:40:01.394331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.492 [2024-11-20 15:40:01.394370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.492 [2024-11-20 15:40:01.394381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:15.492 [2024-11-20 15:40:01.394391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.492 [2024-11-20 15:40:01.394400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.492 [2024-11-20 15:40:01.394442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:15.492 [2024-11-20 15:40:01.394459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:15.492 [2024-11-20 15:40:01.394472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:15.492 [2024-11-20 15:40:01.394481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.492 [2024-11-20 15:40:01.394666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 510.204 ms, result 0 00:28:16.888 00:28:16.888 00:28:16.888 15:40:02 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:17.147 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:28:17.147 15:40:02 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:28:17.147 15:40:02 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:28:17.147 15:40:02 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:17.147 15:40:02 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:17.147 15:40:02 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:28:17.147 15:40:03 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:17.147 15:40:03 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78929 00:28:17.147 Process with pid 78929 is not found 00:28:17.147 15:40:03 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78929 ']' 00:28:17.147 15:40:03 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78929 00:28:17.147 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78929) - No such process 00:28:17.147 15:40:03 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78929 is not found' 00:28:17.147 00:28:17.147 real 1m7.347s 00:28:17.147 user 1m36.135s 00:28:17.147 sys 0m6.997s 00:28:17.147 ************************************ 00:28:17.147 END TEST ftl_trim 00:28:17.147 ************************************ 00:28:17.147 15:40:03 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.147 15:40:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:17.406 15:40:03 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:17.406 15:40:03 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:17.406 15:40:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.406 15:40:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:17.406 ************************************ 00:28:17.406 START TEST ftl_restore 00:28:17.406 
************************************ 00:28:17.406 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:17.406 * Looking for test storage... 00:28:17.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.406 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:17.406 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:17.406 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:28:17.406 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.406 15:40:03 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:28:17.406 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.406 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:17.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.406 --rc genhtml_branch_coverage=1 00:28:17.406 --rc genhtml_function_coverage=1 00:28:17.406 --rc genhtml_legend=1 00:28:17.406 --rc geninfo_all_blocks=1 00:28:17.406 --rc geninfo_unexecuted_blocks=1 00:28:17.406 00:28:17.406 ' 00:28:17.406 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:17.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.406 --rc genhtml_branch_coverage=1 00:28:17.406 --rc genhtml_function_coverage=1 00:28:17.406 --rc genhtml_legend=1 00:28:17.406 --rc geninfo_all_blocks=1 00:28:17.406 --rc geninfo_unexecuted_blocks=1 00:28:17.406 00:28:17.406 ' 00:28:17.406 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:17.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.406 --rc genhtml_branch_coverage=1 00:28:17.406 --rc genhtml_function_coverage=1 00:28:17.406 --rc genhtml_legend=1 00:28:17.406 --rc geninfo_all_blocks=1 00:28:17.406 --rc geninfo_unexecuted_blocks=1 00:28:17.406 00:28:17.406 ' 00:28:17.406 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:17.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.406 --rc genhtml_branch_coverage=1 00:28:17.406 --rc genhtml_function_coverage=1 00:28:17.406 --rc genhtml_legend=1 00:28:17.406 --rc geninfo_all_blocks=1 00:28:17.406 --rc geninfo_unexecuted_blocks=1 00:28:17.406 00:28:17.406 ' 00:28:17.406 15:40:03 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:17.406 15:40:03 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:17.406 15:40:03 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.406 15:40:03 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.407 15:40:03 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
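[The lt/cmp_versions trace above (scripts/common.sh) is a field-wise version compare: each version string is split on ".", "-", and ":" into an array, the loop runs to the longer array's length with missing fields treated as 0, and fields are compared left to right. A minimal standalone sketch of that logic, assuming purely numeric fields; the function name is illustrative, not the script's own:]

version_lt() {
    # split "1.15" -> (1 15) and "2" -> (2), mirroring the IFS=.-: reads in the trace
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # missing fields default to 0, so 1.15 vs 2 compares like 1.15 vs 2.0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than, matching the "lt 1.15 2" semantics above
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2: use the old-style LCOV_OPTS"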
00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.XUHwZAwmLA 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:28:17.666 15:40:03 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:17.667 15:40:03 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:17.667 15:40:03 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:28:17.667 15:40:03 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:17.667 15:40:03 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:28:17.667 15:40:03 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:17.667 
15:40:03 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79192 00:28:17.667 15:40:03 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79192 00:28:17.667 15:40:03 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.667 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79192 ']' 00:28:17.667 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.667 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.667 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.667 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.667 15:40:03 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:17.667 [2024-11-20 15:40:03.516711] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:17.667 [2024-11-20 15:40:03.517689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79192 ] 00:28:17.926 [2024-11-20 15:40:03.721964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.185 [2024-11-20 15:40:03.904430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.124 15:40:04 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.124 15:40:04 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:28:19.124 15:40:04 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:19.124 15:40:04 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:28:19.124 15:40:04 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:19.124 15:40:04 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:28:19.124 15:40:04 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:28:19.124 15:40:04 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:19.383 15:40:05 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:19.383 15:40:05 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:28:19.383 15:40:05 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:19.383 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:19.383 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:19.383 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:19.383 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:19.383 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:19.383 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:19.383 { 00:28:19.383 "name": "nvme0n1", 00:28:19.383 "aliases": [ 00:28:19.383 "026eb5d6-a416-4f0d-a6a0-c9d3d084b7a8" 00:28:19.383 ], 00:28:19.383 "product_name": "NVMe disk", 00:28:19.383 "block_size": 4096, 00:28:19.383 "num_blocks": 1310720, 00:28:19.383 "uuid": 
"026eb5d6-a416-4f0d-a6a0-c9d3d084b7a8", 00:28:19.383 "numa_id": -1, 00:28:19.383 "assigned_rate_limits": { 00:28:19.383 "rw_ios_per_sec": 0, 00:28:19.383 "rw_mbytes_per_sec": 0, 00:28:19.383 "r_mbytes_per_sec": 0, 00:28:19.383 "w_mbytes_per_sec": 0 00:28:19.383 }, 00:28:19.383 "claimed": true, 00:28:19.383 "claim_type": "read_many_write_one", 00:28:19.383 "zoned": false, 00:28:19.383 "supported_io_types": { 00:28:19.383 "read": true, 00:28:19.383 "write": true, 00:28:19.383 "unmap": true, 00:28:19.383 "flush": true, 00:28:19.383 "reset": true, 00:28:19.383 "nvme_admin": true, 00:28:19.383 "nvme_io": true, 00:28:19.383 "nvme_io_md": false, 00:28:19.383 "write_zeroes": true, 00:28:19.383 "zcopy": false, 00:28:19.383 "get_zone_info": false, 00:28:19.383 "zone_management": false, 00:28:19.383 "zone_append": false, 00:28:19.383 "compare": true, 00:28:19.383 "compare_and_write": false, 00:28:19.383 "abort": true, 00:28:19.383 "seek_hole": false, 00:28:19.383 "seek_data": false, 00:28:19.383 "copy": true, 00:28:19.383 "nvme_iov_md": false 00:28:19.383 }, 00:28:19.383 "driver_specific": { 00:28:19.383 "nvme": [ 00:28:19.383 { 00:28:19.383 "pci_address": "0000:00:11.0", 00:28:19.383 "trid": { 00:28:19.383 "trtype": "PCIe", 00:28:19.383 "traddr": "0000:00:11.0" 00:28:19.383 }, 00:28:19.383 "ctrlr_data": { 00:28:19.383 "cntlid": 0, 00:28:19.383 "vendor_id": "0x1b36", 00:28:19.383 "model_number": "QEMU NVMe Ctrl", 00:28:19.383 "serial_number": "12341", 00:28:19.383 "firmware_revision": "8.0.0", 00:28:19.383 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:19.383 "oacs": { 00:28:19.383 "security": 0, 00:28:19.383 "format": 1, 00:28:19.383 "firmware": 0, 00:28:19.383 "ns_manage": 1 00:28:19.383 }, 00:28:19.383 "multi_ctrlr": false, 00:28:19.383 "ana_reporting": false 00:28:19.383 }, 00:28:19.383 "vs": { 00:28:19.383 "nvme_version": "1.4" 00:28:19.383 }, 00:28:19.383 "ns_data": { 00:28:19.383 "id": 1, 00:28:19.383 "can_share": false 00:28:19.383 } 00:28:19.383 } 00:28:19.383 ], 00:28:19.383 "mp_policy": "active_passive" 00:28:19.383 } 00:28:19.383 } 00:28:19.383 ]' 00:28:19.642 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:19.642 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:19.642 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:19.642 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:19.642 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:19.642 15:40:05 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:28:19.642 15:40:05 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:28:19.642 15:40:05 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:19.642 15:40:05 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:28:19.642 15:40:05 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:19.642 15:40:05 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:19.900 15:40:05 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=f4ef6bca-a58c-4507-8a1f-235b505d568d 00:28:19.900 15:40:05 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:28:19.900 15:40:05 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f4ef6bca-a58c-4507-8a1f-235b505d568d 00:28:20.159 15:40:05 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:28:20.159 15:40:06 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=0262fcc6-29c8-4754-8911-55baa9a631f4 00:28:20.159 15:40:06 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0262fcc6-29c8-4754-8911-55baa9a631f4 00:28:20.418 15:40:06 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:20.418 15:40:06 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:20.418 15:40:06 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:20.418 15:40:06 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:28:20.418 15:40:06 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:20.418 15:40:06 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:20.418 15:40:06 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:28:20.418 15:40:06 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:20.418 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:20.418 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:20.418 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:20.418 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:20.418 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:20.677 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:20.677 { 00:28:20.677 "name": "3897739f-4f2b-49a7-a46e-19bf16fd0c64", 00:28:20.677 "aliases": [ 00:28:20.677 "lvs/nvme0n1p0" 00:28:20.677 ], 00:28:20.677 "product_name": "Logical Volume", 00:28:20.677 "block_size": 4096, 00:28:20.677 "num_blocks": 26476544, 00:28:20.677 "uuid": "3897739f-4f2b-49a7-a46e-19bf16fd0c64", 00:28:20.677 "assigned_rate_limits": { 00:28:20.677 "rw_ios_per_sec": 0, 00:28:20.677 "rw_mbytes_per_sec": 0, 00:28:20.677 "r_mbytes_per_sec": 0, 00:28:20.677 "w_mbytes_per_sec": 0 00:28:20.677 }, 00:28:20.677 "claimed": false, 00:28:20.677 "zoned": false, 00:28:20.677 "supported_io_types": { 00:28:20.677 "read": true, 00:28:20.677 "write": true, 00:28:20.677 "unmap": true, 00:28:20.677 "flush": false, 00:28:20.677 "reset": true, 00:28:20.677 "nvme_admin": false, 00:28:20.677 "nvme_io": false, 00:28:20.677 "nvme_io_md": false, 00:28:20.677 "write_zeroes": true, 00:28:20.677 "zcopy": false, 00:28:20.677 "get_zone_info": false, 00:28:20.677 "zone_management": false, 00:28:20.677 "zone_append": false, 00:28:20.677 "compare": false, 00:28:20.677 "compare_and_write": false, 00:28:20.677 "abort": false, 00:28:20.677 "seek_hole": true, 00:28:20.677 "seek_data": true, 00:28:20.677 "copy": false, 00:28:20.677 "nvme_iov_md": false 00:28:20.677 }, 00:28:20.677 "driver_specific": { 00:28:20.677 "lvol": { 00:28:20.677 "lvol_store_uuid": "0262fcc6-29c8-4754-8911-55baa9a631f4", 00:28:20.677 "base_bdev": "nvme0n1", 00:28:20.677 "thin_provision": true, 00:28:20.677 "num_allocated_clusters": 0, 00:28:20.677 "snapshot": false, 00:28:20.677 "clone": false, 00:28:20.677 "esnap_clone": false 00:28:20.677 } 00:28:20.677 } 00:28:20.677 } 00:28:20.677 ]' 00:28:20.677 15:40:06 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:20.677 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:20.677 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:20.677 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:20.677 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:20.677 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:20.677 15:40:06 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:28:20.677 15:40:06 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:28:20.677 15:40:06 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:21.246 15:40:06 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:21.246 15:40:06 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:21.246 15:40:06 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:21.246 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:21.246 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:21.246 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:21.246 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:21.246 15:40:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:21.246 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:21.246 { 00:28:21.246 "name": "3897739f-4f2b-49a7-a46e-19bf16fd0c64", 00:28:21.246 "aliases": [ 00:28:21.246 "lvs/nvme0n1p0" 00:28:21.246 ], 00:28:21.246 "product_name": "Logical Volume", 00:28:21.246 "block_size": 4096, 00:28:21.246 "num_blocks": 26476544, 00:28:21.246 "uuid": "3897739f-4f2b-49a7-a46e-19bf16fd0c64", 00:28:21.246 "assigned_rate_limits": { 00:28:21.246 "rw_ios_per_sec": 0, 00:28:21.246 "rw_mbytes_per_sec": 0, 00:28:21.246 "r_mbytes_per_sec": 0, 00:28:21.246 "w_mbytes_per_sec": 0 00:28:21.246 }, 00:28:21.246 "claimed": false, 00:28:21.246 "zoned": false, 00:28:21.246 "supported_io_types": { 00:28:21.246 "read": true, 00:28:21.246 "write": true, 00:28:21.246 "unmap": true, 00:28:21.246 "flush": false, 00:28:21.246 "reset": true, 00:28:21.246 "nvme_admin": false, 00:28:21.246 "nvme_io": false, 00:28:21.246 "nvme_io_md": false, 00:28:21.246 "write_zeroes": true, 00:28:21.246 "zcopy": false, 00:28:21.246 "get_zone_info": false, 00:28:21.246 "zone_management": false, 00:28:21.246 "zone_append": false, 00:28:21.246 "compare": false, 00:28:21.246 "compare_and_write": false, 00:28:21.246 "abort": false, 00:28:21.246 "seek_hole": true, 00:28:21.246 "seek_data": true, 00:28:21.246 "copy": false, 00:28:21.246 "nvme_iov_md": false 00:28:21.246 }, 00:28:21.246 "driver_specific": { 00:28:21.246 "lvol": { 00:28:21.246 "lvol_store_uuid": "0262fcc6-29c8-4754-8911-55baa9a631f4", 00:28:21.246 "base_bdev": "nvme0n1", 00:28:21.246 "thin_provision": true, 00:28:21.246 "num_allocated_clusters": 0, 00:28:21.246 "snapshot": false, 00:28:21.246 "clone": false, 00:28:21.246 "esnap_clone": false 00:28:21.246 } 00:28:21.246 } 00:28:21.246 } 00:28:21.246 ]' 00:28:21.246 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
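[The jq calls above and the bs=/nb= assignments around them are get_bdev_size at work: it pulls block_size and num_blocks out of the bdev_get_bdevs JSON and reports the size in MiB. A standalone sketch of the same arithmetic, using the rpc.py invocation seen throughout this run; the division shown reproduces the bdev_size values traced here rather than quoting autotest_common.sh verbatim:]

info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3897739f-4f2b-49a7-a46e-19bf16fd0c64)
bs=$(jq '.[] .block_size' <<< "$info")    # 4096
nb=$(jq '.[] .num_blocks' <<< "$info")    # 26476544
# 4096 B/block * 26476544 blocks = 108447924224 B = 103424 MiB
echo $(( bs * nb / 1024 / 1024 ))

[The same calculation explains the earlier base-device size: 4096 * 1310720 / 1024 / 1024 = 5120 MiB for nvme0n1.]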
00:28:21.246 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:21.246 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:21.246 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:21.246 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:21.246 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:21.246 15:40:07 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:28:21.246 15:40:07 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:21.505 15:40:07 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:21.505 15:40:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:21.505 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:21.505 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:21.505 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:21.505 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:21.505 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3897739f-4f2b-49a7-a46e-19bf16fd0c64 00:28:21.764 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:21.764 { 00:28:21.764 "name": "3897739f-4f2b-49a7-a46e-19bf16fd0c64", 00:28:21.764 "aliases": [ 00:28:21.764 "lvs/nvme0n1p0" 00:28:21.764 ], 00:28:21.764 "product_name": "Logical Volume", 00:28:21.764 "block_size": 4096, 00:28:21.764 "num_blocks": 26476544, 00:28:21.764 "uuid": "3897739f-4f2b-49a7-a46e-19bf16fd0c64", 00:28:21.764 "assigned_rate_limits": { 00:28:21.764 "rw_ios_per_sec": 0, 00:28:21.764 "rw_mbytes_per_sec": 0, 00:28:21.764 "r_mbytes_per_sec": 0, 00:28:21.764 "w_mbytes_per_sec": 0 00:28:21.764 }, 00:28:21.764 "claimed": false, 00:28:21.764 "zoned": false, 00:28:21.764 "supported_io_types": { 00:28:21.764 "read": true, 00:28:21.764 "write": true, 00:28:21.764 "unmap": true, 00:28:21.764 "flush": false, 00:28:21.764 "reset": true, 00:28:21.764 "nvme_admin": false, 00:28:21.764 "nvme_io": false, 00:28:21.764 "nvme_io_md": false, 00:28:21.764 "write_zeroes": true, 00:28:21.764 "zcopy": false, 00:28:21.764 "get_zone_info": false, 00:28:21.764 "zone_management": false, 00:28:21.764 "zone_append": false, 00:28:21.764 "compare": false, 00:28:21.764 "compare_and_write": false, 00:28:21.764 "abort": false, 00:28:21.764 "seek_hole": true, 00:28:21.764 "seek_data": true, 00:28:21.764 "copy": false, 00:28:21.764 "nvme_iov_md": false 00:28:21.764 }, 00:28:21.764 "driver_specific": { 00:28:21.764 "lvol": { 00:28:21.764 "lvol_store_uuid": "0262fcc6-29c8-4754-8911-55baa9a631f4", 00:28:21.764 "base_bdev": "nvme0n1", 00:28:21.764 "thin_provision": true, 00:28:21.764 "num_allocated_clusters": 0, 00:28:21.764 "snapshot": false, 00:28:21.764 "clone": false, 00:28:21.764 "esnap_clone": false 00:28:21.764 } 00:28:21.764 } 00:28:21.764 } 00:28:21.764 ]' 00:28:21.764 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:21.764 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:21.764 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:21.764 15:40:07 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:28:21.764 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:21.764 15:40:07 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:21.764 15:40:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:21.764 15:40:07 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3897739f-4f2b-49a7-a46e-19bf16fd0c64 --l2p_dram_limit 10' 00:28:21.764 15:40:07 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:21.764 15:40:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:21.764 15:40:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:21.764 15:40:07 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:28:21.764 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:28:21.764 15:40:07 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3897739f-4f2b-49a7-a46e-19bf16fd0c64 --l2p_dram_limit 10 -c nvc0n1p0 00:28:22.024 [2024-11-20 15:40:07.884460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.024 [2024-11-20 15:40:07.884530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:22.024 [2024-11-20 15:40:07.884567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:22.024 [2024-11-20 15:40:07.884578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.024 [2024-11-20 15:40:07.884687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.024 [2024-11-20 15:40:07.884701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:22.024 [2024-11-20 15:40:07.884715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:22.024 [2024-11-20 15:40:07.884726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.024 [2024-11-20 15:40:07.884751] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:22.024 [2024-11-20 15:40:07.885821] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:22.024 [2024-11-20 15:40:07.885856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.024 [2024-11-20 15:40:07.885868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:22.024 [2024-11-20 15:40:07.885881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:28:22.024 [2024-11-20 15:40:07.885892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.024 [2024-11-20 15:40:07.885979] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0c8d862d-b117-4ac0-b4e8-e65fd9e66655 00:28:22.024 [2024-11-20 15:40:07.887469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.024 [2024-11-20 15:40:07.887499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:22.024 [2024-11-20 15:40:07.887511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:22.024 [2024-11-20 15:40:07.887524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.024 [2024-11-20 15:40:07.895075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.024 [2024-11-20 
15:40:07.895114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:22.024 [2024-11-20 15:40:07.895127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.487 ms 00:28:22.024 [2024-11-20 15:40:07.895139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.024 [2024-11-20 15:40:07.895240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.024 [2024-11-20 15:40:07.895257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:22.024 [2024-11-20 15:40:07.895268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:22.024 [2024-11-20 15:40:07.895285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.024 [2024-11-20 15:40:07.895347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.024 [2024-11-20 15:40:07.895362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:22.024 [2024-11-20 15:40:07.895373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:22.024 [2024-11-20 15:40:07.895389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.024 [2024-11-20 15:40:07.895414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:22.025 [2024-11-20 15:40:07.900283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.025 [2024-11-20 15:40:07.900319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:22.025 [2024-11-20 15:40:07.900335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.874 ms 00:28:22.025 [2024-11-20 15:40:07.900361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.025 [2024-11-20 15:40:07.900400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.025 [2024-11-20 15:40:07.900411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:22.025 [2024-11-20 15:40:07.900423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:22.025 [2024-11-20 15:40:07.900433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.025 [2024-11-20 15:40:07.900481] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:22.025 [2024-11-20 15:40:07.900623] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:22.025 [2024-11-20 15:40:07.900644] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:22.025 [2024-11-20 15:40:07.900658] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:22.025 [2024-11-20 15:40:07.900674] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:22.025 [2024-11-20 15:40:07.900686] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:22.025 [2024-11-20 15:40:07.900699] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:22.025 [2024-11-20 15:40:07.900709] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:22.025 [2024-11-20 15:40:07.900725] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:22.025 [2024-11-20 15:40:07.900734] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:22.025 [2024-11-20 15:40:07.900748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.025 [2024-11-20 15:40:07.900757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:22.025 [2024-11-20 15:40:07.900770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:28:22.025 [2024-11-20 15:40:07.900790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.025 [2024-11-20 15:40:07.900874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.025 [2024-11-20 15:40:07.900885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:22.025 [2024-11-20 15:40:07.900897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:22.025 [2024-11-20 15:40:07.900907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.025 [2024-11-20 15:40:07.901026] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:22.025 [2024-11-20 15:40:07.901040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:22.025 [2024-11-20 15:40:07.901052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:22.025 [2024-11-20 15:40:07.901063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:22.025 [2024-11-20 15:40:07.901085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:22.025 [2024-11-20 15:40:07.901107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:22.025 [2024-11-20 15:40:07.901119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:22.025 [2024-11-20 15:40:07.901140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:22.025 [2024-11-20 15:40:07.901150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:22.025 [2024-11-20 15:40:07.901162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:22.025 [2024-11-20 15:40:07.901171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:22.025 [2024-11-20 15:40:07.901183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:22.025 [2024-11-20 15:40:07.901192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:22.025 [2024-11-20 15:40:07.901217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:22.025 [2024-11-20 15:40:07.901230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:22.025 [2024-11-20 15:40:07.901251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.025 [2024-11-20 15:40:07.901272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:22.025 
[2024-11-20 15:40:07.901281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.025 [2024-11-20 15:40:07.901302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:22.025 [2024-11-20 15:40:07.901313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.025 [2024-11-20 15:40:07.901334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:22.025 [2024-11-20 15:40:07.901343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.025 [2024-11-20 15:40:07.901363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:22.025 [2024-11-20 15:40:07.901377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:22.025 [2024-11-20 15:40:07.901399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:22.025 [2024-11-20 15:40:07.901408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:22.025 [2024-11-20 15:40:07.901419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:22.025 [2024-11-20 15:40:07.901429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:22.025 [2024-11-20 15:40:07.901440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:22.025 [2024-11-20 15:40:07.901449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:22.025 [2024-11-20 15:40:07.901470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:22.025 [2024-11-20 15:40:07.901481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901490] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:22.025 [2024-11-20 15:40:07.901503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:22.025 [2024-11-20 15:40:07.901513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:22.025 [2024-11-20 15:40:07.901527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.025 [2024-11-20 15:40:07.901537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:22.025 [2024-11-20 15:40:07.901551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:22.025 [2024-11-20 15:40:07.901561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:22.025 [2024-11-20 15:40:07.901573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:22.025 [2024-11-20 15:40:07.901593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:22.025 [2024-11-20 15:40:07.901605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:22.025 [2024-11-20 15:40:07.901619] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:22.025 [2024-11-20 
15:40:07.901634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:22.025 [2024-11-20 15:40:07.901649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:22.025 [2024-11-20 15:40:07.901662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:22.025 [2024-11-20 15:40:07.901672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:22.026 [2024-11-20 15:40:07.901685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:22.026 [2024-11-20 15:40:07.901696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:22.026 [2024-11-20 15:40:07.901709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:22.026 [2024-11-20 15:40:07.901719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:22.026 [2024-11-20 15:40:07.901731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:22.026 [2024-11-20 15:40:07.901741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:22.026 [2024-11-20 15:40:07.901757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:22.026 [2024-11-20 15:40:07.901767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:22.026 [2024-11-20 15:40:07.901780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:22.026 [2024-11-20 15:40:07.901790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:22.026 [2024-11-20 15:40:07.901805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:22.026 [2024-11-20 15:40:07.901816] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:22.026 [2024-11-20 15:40:07.901830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:22.026 [2024-11-20 15:40:07.901842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:22.026 [2024-11-20 15:40:07.901855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:22.026 [2024-11-20 15:40:07.901866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:22.026 [2024-11-20 15:40:07.901879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:22.026 [2024-11-20 15:40:07.901890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.026 [2024-11-20 15:40:07.901903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:22.026 [2024-11-20 15:40:07.901913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:28:22.026 [2024-11-20 15:40:07.901925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.026 [2024-11-20 15:40:07.901967] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:22.026 [2024-11-20 15:40:07.901985] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:24.561 [2024-11-20 15:40:10.449712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.561 [2024-11-20 15:40:10.449783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:24.561 [2024-11-20 15:40:10.449801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2547.727 ms 00:28:24.561 [2024-11-20 15:40:10.449815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.561 [2024-11-20 15:40:10.487763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.561 [2024-11-20 15:40:10.487820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:24.561 [2024-11-20 15:40:10.487835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.662 ms 00:28:24.561 [2024-11-20 15:40:10.487849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.561 [2024-11-20 15:40:10.487988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.561 [2024-11-20 15:40:10.488005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:24.561 [2024-11-20 15:40:10.488016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:24.561 [2024-11-20 15:40:10.488035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-11-20 15:40:10.533286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-11-20 15:40:10.534867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:24.820 [2024-11-20 15:40:10.534895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.190 ms 00:28:24.820 [2024-11-20 15:40:10.534909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-11-20 15:40:10.534954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-11-20 15:40:10.534974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:24.820 [2024-11-20 15:40:10.534985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:24.820 [2024-11-20 15:40:10.534998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-11-20 15:40:10.535485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-11-20 15:40:10.535504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:24.820 [2024-11-20 15:40:10.535515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:28:24.820 [2024-11-20 15:40:10.535528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 
[2024-11-20 15:40:10.535639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-11-20 15:40:10.535654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:24.820 [2024-11-20 15:40:10.535668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:28:24.820 [2024-11-20 15:40:10.535684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-11-20 15:40:10.555262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-11-20 15:40:10.555310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:24.820 [2024-11-20 15:40:10.555324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.556 ms 00:28:24.820 [2024-11-20 15:40:10.555337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-11-20 15:40:10.578900] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:24.820 [2024-11-20 15:40:10.582399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-11-20 15:40:10.582586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:24.820 [2024-11-20 15:40:10.582620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.962 ms 00:28:24.820 [2024-11-20 15:40:10.582634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-11-20 15:40:10.655437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-11-20 15:40:10.655727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:24.820 [2024-11-20 15:40:10.655760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.752 ms 00:28:24.820 [2024-11-20 15:40:10.655771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-11-20 15:40:10.655966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-11-20 15:40:10.655983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:24.820 [2024-11-20 15:40:10.656001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:28:24.820 [2024-11-20 15:40:10.656011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-11-20 15:40:10.691999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-11-20 15:40:10.692037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:24.820 [2024-11-20 15:40:10.692054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.928 ms 00:28:24.820 [2024-11-20 15:40:10.692064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-11-20 15:40:10.726852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-11-20 15:40:10.727034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:24.820 [2024-11-20 15:40:10.727061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.740 ms 00:28:24.820 [2024-11-20 15:40:10.727072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-11-20 15:40:10.727819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-11-20 15:40:10.727843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:24.820 
[2024-11-20 15:40:10.727857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:28:24.820 [2024-11-20 15:40:10.727871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.080 [2024-11-20 15:40:10.822952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.080 [2024-11-20 15:40:10.823174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:25.080 [2024-11-20 15:40:10.823207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.020 ms 00:28:25.080 [2024-11-20 15:40:10.823218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.080 [2024-11-20 15:40:10.859842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.080 [2024-11-20 15:40:10.860016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:25.080 [2024-11-20 15:40:10.860042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.537 ms 00:28:25.080 [2024-11-20 15:40:10.860054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.080 [2024-11-20 15:40:10.895206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.080 [2024-11-20 15:40:10.895351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:25.080 [2024-11-20 15:40:10.895392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.056 ms 00:28:25.080 [2024-11-20 15:40:10.895403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.080 [2024-11-20 15:40:10.931533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.080 [2024-11-20 15:40:10.931578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:25.080 [2024-11-20 15:40:10.931612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.049 ms 00:28:25.080 [2024-11-20 15:40:10.931622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.080 [2024-11-20 15:40:10.931669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.080 [2024-11-20 15:40:10.931700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:25.080 [2024-11-20 15:40:10.931717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:25.080 [2024-11-20 15:40:10.931728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.080 [2024-11-20 15:40:10.931830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.080 [2024-11-20 15:40:10.931844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:25.080 [2024-11-20 15:40:10.931861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:25.080 [2024-11-20 15:40:10.931870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.080 [2024-11-20 15:40:10.932934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3047.972 ms, result 0 00:28:25.080 { 00:28:25.080 "name": "ftl0", 00:28:25.080 "uuid": "0c8d862d-b117-4ac0-b4e8-e65fd9e66655" 00:28:25.080 } 00:28:25.080 15:40:10 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:25.080 15:40:10 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:25.340 15:40:11 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:28:25.340 15:40:11 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:25.600 [2024-11-20 15:40:11.536397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.600 [2024-11-20 15:40:11.536454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:25.600 [2024-11-20 15:40:11.536486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:25.600 [2024-11-20 15:40:11.536507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.600 [2024-11-20 15:40:11.536535] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:25.600 [2024-11-20 15:40:11.540719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.600 [2024-11-20 15:40:11.540753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:25.600 [2024-11-20 15:40:11.540769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.162 ms 00:28:25.600 [2024-11-20 15:40:11.540779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.600 [2024-11-20 15:40:11.541018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.600 [2024-11-20 15:40:11.541033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:25.600 [2024-11-20 15:40:11.541046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:28:25.600 [2024-11-20 15:40:11.541056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.600 [2024-11-20 15:40:11.543667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.600 [2024-11-20 15:40:11.543690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:25.600 [2024-11-20 15:40:11.543704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:28:25.600 [2024-11-20 15:40:11.543725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.600 [2024-11-20 15:40:11.548789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.600 [2024-11-20 15:40:11.548820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:25.600 [2024-11-20 15:40:11.548838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.026 ms 00:28:25.600 [2024-11-20 15:40:11.548847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.859 [2024-11-20 15:40:11.585366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.859 [2024-11-20 15:40:11.585403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:25.859 [2024-11-20 15:40:11.585420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.447 ms 00:28:25.859 [2024-11-20 15:40:11.585429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.859 [2024-11-20 15:40:11.606968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.859 [2024-11-20 15:40:11.607005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:25.860 [2024-11-20 15:40:11.607021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.491 ms 00:28:25.860 [2024-11-20 15:40:11.607032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.860 [2024-11-20 15:40:11.607181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.860 [2024-11-20 15:40:11.607194] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:25.860 [2024-11-20 15:40:11.607208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:25.860 [2024-11-20 15:40:11.607218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.860 [2024-11-20 15:40:11.643453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.860 [2024-11-20 15:40:11.643488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:25.860 [2024-11-20 15:40:11.643504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.210 ms 00:28:25.860 [2024-11-20 15:40:11.643529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.860 [2024-11-20 15:40:11.679410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.860 [2024-11-20 15:40:11.679593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:25.860 [2024-11-20 15:40:11.679619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.836 ms 00:28:25.860 [2024-11-20 15:40:11.679629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.860 [2024-11-20 15:40:11.714552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.860 [2024-11-20 15:40:11.714607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:25.860 [2024-11-20 15:40:11.714622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.855 ms 00:28:25.860 [2024-11-20 15:40:11.714648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.860 [2024-11-20 15:40:11.750087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.860 [2024-11-20 15:40:11.750122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:25.860 [2024-11-20 15:40:11.750137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.344 ms 00:28:25.860 [2024-11-20 15:40:11.750163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.860 [2024-11-20 15:40:11.750204] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:25.860 [2024-11-20 15:40:11.750220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750334] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 
[2024-11-20 15:40:11.750668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:28:25.860 [2024-11-20 15:40:11.750985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.750996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.751010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.751021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.751034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.751044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.751057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.751068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.751081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.751091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:25.860 [2024-11-20 15:40:11.751104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:25.861 [2024-11-20 15:40:11.751487] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:25.861 [2024-11-20 15:40:11.751503] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c8d862d-b117-4ac0-b4e8-e65fd9e66655 00:28:25.861 [2024-11-20 15:40:11.751513] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:25.861 [2024-11-20 15:40:11.751528] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:25.861 [2024-11-20 15:40:11.751538] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:25.861 [2024-11-20 15:40:11.751554] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:25.861 [2024-11-20 15:40:11.751564] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:25.861 [2024-11-20 15:40:11.751597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:25.861 [2024-11-20 15:40:11.751608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:25.861 [2024-11-20 15:40:11.751623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:25.861 [2024-11-20 15:40:11.751632] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:28:25.861 [2024-11-20 15:40:11.751647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.861 [2024-11-20 15:40:11.751659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:25.861 [2024-11-20 15:40:11.751675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:28:25.861 [2024-11-20 15:40:11.751697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.861 [2024-11-20 15:40:11.771692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.861 [2024-11-20 15:40:11.771843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:25.861 [2024-11-20 15:40:11.771884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.937 ms 00:28:25.861 [2024-11-20 15:40:11.771894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.861 [2024-11-20 15:40:11.772433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.861 [2024-11-20 15:40:11.772447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:25.861 [2024-11-20 15:40:11.772464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:28:25.861 [2024-11-20 15:40:11.772474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:11.837272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:11.837311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:26.120 [2024-11-20 15:40:11.837326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.120 [2024-11-20 15:40:11.837353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:11.837412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:11.837423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:26.120 [2024-11-20 15:40:11.837440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.120 [2024-11-20 15:40:11.837450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:11.837569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:11.837582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:26.120 [2024-11-20 15:40:11.837622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.120 [2024-11-20 15:40:11.837649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:11.837675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:11.837686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:26.120 [2024-11-20 15:40:11.837699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.120 [2024-11-20 15:40:11.837709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:11.961415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:11.961473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:26.120 [2024-11-20 15:40:11.961491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:28:26.120 [2024-11-20 15:40:11.961501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:12.059871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:12.059925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:26.120 [2024-11-20 15:40:12.059958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.120 [2024-11-20 15:40:12.059972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:12.060092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:12.060104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:26.120 [2024-11-20 15:40:12.060118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.120 [2024-11-20 15:40:12.060128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:12.060182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:12.060194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:26.120 [2024-11-20 15:40:12.060207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.120 [2024-11-20 15:40:12.060216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:12.060355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:12.060368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:26.120 [2024-11-20 15:40:12.060381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.120 [2024-11-20 15:40:12.060390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:12.060442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:12.060454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:26.120 [2024-11-20 15:40:12.060465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.120 [2024-11-20 15:40:12.060478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:12.060521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:12.060532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:26.120 [2024-11-20 15:40:12.060544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.120 [2024-11-20 15:40:12.060553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:12.060624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.120 [2024-11-20 15:40:12.060655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:26.120 [2024-11-20 15:40:12.060668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.120 [2024-11-20 15:40:12.060678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.120 [2024-11-20 15:40:12.060828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 524.375 ms, result 0 00:28:26.120 true 00:28:26.380 15:40:12 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79192 
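With the 'FTL shutdown' management process finished above (524.375 ms, result 0) and test process 79192 killed, restore.sh proceeds to the restore workload traced below: it generates a random payload, checksums it, and writes it through the recreated ftl0 bdev with spdk_dd. Condensed as a sketch, with the paths taken verbatim from the trace (SPDK_DIR is only a shorthand introduced here, not a variable the script defines):

    # 1 GiB of random payload (4 KiB x 256 Ki blocks), checksummed for the post-restore verify
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    dd if=/dev/urandom of=$SPDK_DIR/test/ftl/testfile bs=4K count=256K
    md5sum $SPDK_DIR/test/ftl/testfile
    # replay the payload into the ftl0 bdev, using the bdev subsystem JSON saved earlier
    $SPDK_DIR/build/bin/spdk_dd --if=$SPDK_DIR/test/ftl/testfile --ob=ftl0 \
        --json=$SPDK_DIR/test/ftl/config/ftl.json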
00:28:26.380 15:40:12 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79192 ']' 00:28:26.380 15:40:12 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79192 00:28:26.380 15:40:12 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:28:26.380 15:40:12 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.380 15:40:12 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79192 00:28:26.380 killing process with pid 79192 00:28:26.380 15:40:12 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:26.380 15:40:12 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:26.380 15:40:12 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79192' 00:28:26.380 15:40:12 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79192 00:28:26.380 15:40:12 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79192 00:28:31.652 15:40:17 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:35.837 262144+0 records in 00:28:35.837 262144+0 records out 00:28:35.837 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.42804 s, 242 MB/s 00:28:35.837 15:40:21 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:37.739 15:40:23 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:37.739 [2024-11-20 15:40:23.655025] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:37.739 [2024-11-20 15:40:23.655152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79431 ] 00:28:37.997 [2024-11-20 15:40:23.845385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.257 [2024-11-20 15:40:24.007087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.516 [2024-11-20 15:40:24.398426] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:38.516 [2024-11-20 15:40:24.398499] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:38.776 [2024-11-20 15:40:24.567515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.567715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:38.776 [2024-11-20 15:40:24.567752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:38.776 [2024-11-20 15:40:24.567764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.567832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.567844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:38.776 [2024-11-20 15:40:24.567862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:38.776 [2024-11-20 15:40:24.567872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.567895] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:28:38.776 [2024-11-20 15:40:24.568860] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:38.776 [2024-11-20 15:40:24.568886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.568897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:38.776 [2024-11-20 15:40:24.568908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:28:38.776 [2024-11-20 15:40:24.568919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.570386] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:38.776 [2024-11-20 15:40:24.590750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.590916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:38.776 [2024-11-20 15:40:24.590940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.363 ms 00:28:38.776 [2024-11-20 15:40:24.590952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.591045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.591059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:38.776 [2024-11-20 15:40:24.591071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:38.776 [2024-11-20 15:40:24.591081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.598039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.598202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:38.776 [2024-11-20 15:40:24.598222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.879 ms 00:28:38.776 [2024-11-20 15:40:24.598241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.598340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.598353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:38.776 [2024-11-20 15:40:24.598365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:28:38.776 [2024-11-20 15:40:24.598374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.598421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.598433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:38.776 [2024-11-20 15:40:24.598444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:38.776 [2024-11-20 15:40:24.598454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.598485] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:38.776 [2024-11-20 15:40:24.603329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.603365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:38.776 [2024-11-20 15:40:24.603378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.855 ms 00:28:38.776 [2024-11-20 15:40:24.603392] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.603425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.603436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:38.776 [2024-11-20 15:40:24.603447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:38.776 [2024-11-20 15:40:24.603457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.603512] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:38.776 [2024-11-20 15:40:24.603540] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:38.776 [2024-11-20 15:40:24.603589] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:38.776 [2024-11-20 15:40:24.603613] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:38.776 [2024-11-20 15:40:24.603721] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:38.776 [2024-11-20 15:40:24.603736] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:38.776 [2024-11-20 15:40:24.603750] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:38.776 [2024-11-20 15:40:24.603764] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:38.776 [2024-11-20 15:40:24.603788] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:38.776 [2024-11-20 15:40:24.603800] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:38.776 [2024-11-20 15:40:24.603809] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:38.776 [2024-11-20 15:40:24.603819] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:38.776 [2024-11-20 15:40:24.603832] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:38.776 [2024-11-20 15:40:24.603843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.603853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:38.776 [2024-11-20 15:40:24.603864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:28:38.776 [2024-11-20 15:40:24.603874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.603947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.776 [2024-11-20 15:40:24.603973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:38.776 [2024-11-20 15:40:24.603985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:38.776 [2024-11-20 15:40:24.603996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.776 [2024-11-20 15:40:24.604110] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:38.776 [2024-11-20 15:40:24.604125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:38.776 [2024-11-20 15:40:24.604136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:28:38.776 [2024-11-20 15:40:24.604147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.776 [2024-11-20 15:40:24.604157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:38.776 [2024-11-20 15:40:24.604166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:38.776 [2024-11-20 15:40:24.604176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:38.776 [2024-11-20 15:40:24.604185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:38.776 [2024-11-20 15:40:24.604194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:38.776 [2024-11-20 15:40:24.604204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:38.776 [2024-11-20 15:40:24.604213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:38.776 [2024-11-20 15:40:24.604223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:38.777 [2024-11-20 15:40:24.604232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:38.777 [2024-11-20 15:40:24.604241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:38.777 [2024-11-20 15:40:24.604251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:38.777 [2024-11-20 15:40:24.604270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.777 [2024-11-20 15:40:24.604279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:38.777 [2024-11-20 15:40:24.604289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:38.777 [2024-11-20 15:40:24.604298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.777 [2024-11-20 15:40:24.604308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:38.777 [2024-11-20 15:40:24.604318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:38.777 [2024-11-20 15:40:24.604327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.777 [2024-11-20 15:40:24.604337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:38.777 [2024-11-20 15:40:24.604346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:38.777 [2024-11-20 15:40:24.604355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.777 [2024-11-20 15:40:24.604364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:38.777 [2024-11-20 15:40:24.604373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:38.777 [2024-11-20 15:40:24.604382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.777 [2024-11-20 15:40:24.604391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:38.777 [2024-11-20 15:40:24.604401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:38.777 [2024-11-20 15:40:24.604410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.777 [2024-11-20 15:40:24.604419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:38.777 [2024-11-20 15:40:24.604428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:38.777 [2024-11-20 15:40:24.604437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:38.777 [2024-11-20 15:40:24.604446] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:28:38.777 [2024-11-20 15:40:24.604455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:38.777 [2024-11-20 15:40:24.604464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:38.777 [2024-11-20 15:40:24.604473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:38.777 [2024-11-20 15:40:24.604482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:38.777 [2024-11-20 15:40:24.604491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.777 [2024-11-20 15:40:24.604500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:38.777 [2024-11-20 15:40:24.604509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:38.777 [2024-11-20 15:40:24.604518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.777 [2024-11-20 15:40:24.604527] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:38.777 [2024-11-20 15:40:24.604536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:38.777 [2024-11-20 15:40:24.604546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:38.777 [2024-11-20 15:40:24.604557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.777 [2024-11-20 15:40:24.604567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:38.777 [2024-11-20 15:40:24.604576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:38.777 [2024-11-20 15:40:24.604585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:38.777 [2024-11-20 15:40:24.604594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:38.777 [2024-11-20 15:40:24.604603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:38.777 [2024-11-20 15:40:24.604613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:38.777 [2024-11-20 15:40:24.604638] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:38.777 [2024-11-20 15:40:24.604651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:38.777 [2024-11-20 15:40:24.604662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:38.777 [2024-11-20 15:40:24.604672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:38.777 [2024-11-20 15:40:24.604683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:38.777 [2024-11-20 15:40:24.604693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:38.777 [2024-11-20 15:40:24.604703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:38.777 [2024-11-20 15:40:24.604714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:38.777 [2024-11-20 15:40:24.604847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:38.777 [2024-11-20 15:40:24.604857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:38.777 [2024-11-20 15:40:24.604868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:38.777 [2024-11-20 15:40:24.604878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:38.777 [2024-11-20 15:40:24.604889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:38.777 [2024-11-20 15:40:24.604899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:38.777 [2024-11-20 15:40:24.604909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:38.777 [2024-11-20 15:40:24.604919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:38.777 [2024-11-20 15:40:24.604929] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:38.777 [2024-11-20 15:40:24.604945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:38.777 [2024-11-20 15:40:24.604955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:38.777 [2024-11-20 15:40:24.604966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:38.777 [2024-11-20 15:40:24.604976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:38.777 [2024-11-20 15:40:24.604987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:38.777 [2024-11-20 15:40:24.604998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.777 [2024-11-20 15:40:24.605008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:38.777 [2024-11-20 15:40:24.605018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:28:38.777 [2024-11-20 15:40:24.605030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.777 [2024-11-20 15:40:24.646374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.777 [2024-11-20 15:40:24.646431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:38.777 [2024-11-20 15:40:24.646447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.294 ms 00:28:38.777 [2024-11-20 15:40:24.646458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.777 [2024-11-20 15:40:24.646561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.777 [2024-11-20 15:40:24.646595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:38.777 [2024-11-20 15:40:24.646607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.057 ms 00:28:38.777 [2024-11-20 15:40:24.646632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.777 [2024-11-20 15:40:24.707436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.777 [2024-11-20 15:40:24.707487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:38.777 [2024-11-20 15:40:24.707504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.714 ms 00:28:38.777 [2024-11-20 15:40:24.707515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.777 [2024-11-20 15:40:24.707588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.777 [2024-11-20 15:40:24.707601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:38.777 [2024-11-20 15:40:24.707618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:38.777 [2024-11-20 15:40:24.707628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.777 [2024-11-20 15:40:24.708154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.777 [2024-11-20 15:40:24.708176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:38.777 [2024-11-20 15:40:24.708189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:28:38.777 [2024-11-20 15:40:24.708200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.777 [2024-11-20 15:40:24.708329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.777 [2024-11-20 15:40:24.708344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:38.777 [2024-11-20 15:40:24.708355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:28:38.777 [2024-11-20 15:40:24.708374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.777 [2024-11-20 15:40:24.728409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.777 [2024-11-20 15:40:24.728615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:38.777 [2024-11-20 15:40:24.728654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.012 ms 00:28:38.777 [2024-11-20 15:40:24.728665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.749128] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:39.037 [2024-11-20 15:40:24.749170] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:39.037 [2024-11-20 15:40:24.749186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.749197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:39.037 [2024-11-20 15:40:24.749209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.379 ms 00:28:39.037 [2024-11-20 15:40:24.749219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.779852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.779906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:39.037 [2024-11-20 15:40:24.779920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.589 ms 00:28:39.037 [2024-11-20 15:40:24.779931] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.799234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.799402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:39.037 [2024-11-20 15:40:24.799423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.257 ms 00:28:39.037 [2024-11-20 15:40:24.799434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.818469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.818508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:39.037 [2024-11-20 15:40:24.818522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.973 ms 00:28:39.037 [2024-11-20 15:40:24.818532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.819375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.819408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:39.037 [2024-11-20 15:40:24.819422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:28:39.037 [2024-11-20 15:40:24.819432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.909641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.909705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:39.037 [2024-11-20 15:40:24.909722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.177 ms 00:28:39.037 [2024-11-20 15:40:24.909744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.921083] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:39.037 [2024-11-20 15:40:24.924303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.924336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:39.037 [2024-11-20 15:40:24.924352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.496 ms 00:28:39.037 [2024-11-20 15:40:24.924362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.924473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.924487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:39.037 [2024-11-20 15:40:24.924500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:39.037 [2024-11-20 15:40:24.924510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.924625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.924639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:39.037 [2024-11-20 15:40:24.924651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:39.037 [2024-11-20 15:40:24.924662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.924688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.924699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:28:39.037 [2024-11-20 15:40:24.924710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:39.037 [2024-11-20 15:40:24.924721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.924759] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:39.037 [2024-11-20 15:40:24.924772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.924788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:39.037 [2024-11-20 15:40:24.924799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:39.037 [2024-11-20 15:40:24.924809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.963130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.963296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:39.037 [2024-11-20 15:40:24.963320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.297 ms 00:28:39.037 [2024-11-20 15:40:24.963330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.963487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.037 [2024-11-20 15:40:24.963502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:39.037 [2024-11-20 15:40:24.963514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:39.037 [2024-11-20 15:40:24.963524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.037 [2024-11-20 15:40:24.964780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.697 ms, result 0 00:28:40.414  [2024-11-20T15:40:27.311Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-20T15:40:28.251Z] Copying: 62/1024 [MB] (31 MBps) [2024-11-20T15:40:29.188Z] Copying: 94/1024 [MB] (32 MBps) [2024-11-20T15:40:30.125Z] Copying: 126/1024 [MB] (31 MBps) [2024-11-20T15:40:31.106Z] Copying: 155/1024 [MB] (29 MBps) [2024-11-20T15:40:32.044Z] Copying: 185/1024 [MB] (29 MBps) [2024-11-20T15:40:32.981Z] Copying: 216/1024 [MB] (30 MBps) [2024-11-20T15:40:34.358Z] Copying: 247/1024 [MB] (31 MBps) [2024-11-20T15:40:35.295Z] Copying: 278/1024 [MB] (31 MBps) [2024-11-20T15:40:36.232Z] Copying: 309/1024 [MB] (30 MBps) [2024-11-20T15:40:37.166Z] Copying: 340/1024 [MB] (31 MBps) [2024-11-20T15:40:38.102Z] Copying: 371/1024 [MB] (30 MBps) [2024-11-20T15:40:39.038Z] Copying: 402/1024 [MB] (31 MBps) [2024-11-20T15:40:40.075Z] Copying: 433/1024 [MB] (30 MBps) [2024-11-20T15:40:41.011Z] Copying: 464/1024 [MB] (30 MBps) [2024-11-20T15:40:42.390Z] Copying: 495/1024 [MB] (31 MBps) [2024-11-20T15:40:43.327Z] Copying: 527/1024 [MB] (31 MBps) [2024-11-20T15:40:44.264Z] Copying: 558/1024 [MB] (31 MBps) [2024-11-20T15:40:45.201Z] Copying: 589/1024 [MB] (30 MBps) [2024-11-20T15:40:46.138Z] Copying: 621/1024 [MB] (31 MBps) [2024-11-20T15:40:47.076Z] Copying: 652/1024 [MB] (31 MBps) [2024-11-20T15:40:48.016Z] Copying: 684/1024 [MB] (31 MBps) [2024-11-20T15:40:49.391Z] Copying: 716/1024 [MB] (31 MBps) [2024-11-20T15:40:50.328Z] Copying: 747/1024 [MB] (31 MBps) [2024-11-20T15:40:51.266Z] Copying: 779/1024 [MB] (31 MBps) [2024-11-20T15:40:52.202Z] Copying: 810/1024 [MB] (31 MBps) [2024-11-20T15:40:53.139Z] Copying: 841/1024 [MB] (31 
MBps) [2024-11-20T15:40:54.076Z] Copying: 872/1024 [MB] (30 MBps) [2024-11-20T15:40:55.013Z] Copying: 904/1024 [MB] (31 MBps) [2024-11-20T15:40:56.390Z] Copying: 934/1024 [MB] (30 MBps) [2024-11-20T15:40:57.326Z] Copying: 965/1024 [MB] (30 MBps) [2024-11-20T15:40:58.267Z] Copying: 994/1024 [MB] (29 MBps) [2024-11-20T15:40:58.267Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-11-20 15:40:57.947865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.309 [2024-11-20 15:40:57.947910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:12.309 [2024-11-20 15:40:57.947927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:12.309 [2024-11-20 15:40:57.947937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.309 [2024-11-20 15:40:57.947960] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:12.309 [2024-11-20 15:40:57.952421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.309 [2024-11-20 15:40:57.952453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:12.309 [2024-11-20 15:40:57.952466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.443 ms 00:29:12.309 [2024-11-20 15:40:57.952490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.309 [2024-11-20 15:40:57.954244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.309 [2024-11-20 15:40:57.954279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:12.309 [2024-11-20 15:40:57.954292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:29:12.309 [2024-11-20 15:40:57.954302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.309 [2024-11-20 15:40:57.971608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.309 [2024-11-20 15:40:57.971813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:12.309 [2024-11-20 15:40:57.971901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.285 ms 00:29:12.309 [2024-11-20 15:40:57.971940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.309 [2024-11-20 15:40:57.977276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.309 [2024-11-20 15:40:57.977428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:12.309 [2024-11-20 15:40:57.977449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.179 ms 00:29:12.309 [2024-11-20 15:40:57.977460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.309 [2024-11-20 15:40:58.017765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.309 [2024-11-20 15:40:58.018161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:12.309 [2024-11-20 15:40:58.018278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.221 ms 00:29:12.309 [2024-11-20 15:40:58.018320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.309 [2024-11-20 15:40:58.044055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.309 [2024-11-20 15:40:58.044458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:12.309 [2024-11-20 15:40:58.044663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.622 ms 00:29:12.309 [2024-11-20 15:40:58.044712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.309 [2024-11-20 15:40:58.045100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.310 [2024-11-20 15:40:58.045252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:12.310 [2024-11-20 15:40:58.045417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:29:12.310 [2024-11-20 15:40:58.045465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.310 [2024-11-20 15:40:58.089581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.310 [2024-11-20 15:40:58.089950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:12.310 [2024-11-20 15:40:58.090061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.973 ms 00:29:12.310 [2024-11-20 15:40:58.090101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.310 [2024-11-20 15:40:58.132863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.310 [2024-11-20 15:40:58.133059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:12.310 [2024-11-20 15:40:58.133162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.653 ms 00:29:12.310 [2024-11-20 15:40:58.133200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.310 [2024-11-20 15:40:58.178273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.310 [2024-11-20 15:40:58.178774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:12.310 [2024-11-20 15:40:58.178823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.848 ms 00:29:12.310 [2024-11-20 15:40:58.178837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.310 [2024-11-20 15:40:58.225063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.310 [2024-11-20 15:40:58.225169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:12.310 [2024-11-20 15:40:58.225191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.012 ms 00:29:12.310 [2024-11-20 15:40:58.225203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.310 [2024-11-20 15:40:58.225317] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:12.310 [2024-11-20 15:40:58.225346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225438] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 
15:40:58.225824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.225999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:29:12.310 [2024-11-20 15:40:58.226176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:12.310 [2024-11-20 15:40:58.226227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:12.311 [2024-11-20 15:40:58.226803] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:12.311 [2024-11-20 15:40:58.226827] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c8d862d-b117-4ac0-b4e8-e65fd9e66655 00:29:12.311 [2024-11-20 15:40:58.226851] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:12.311 [2024-11-20 15:40:58.226863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:12.311 [2024-11-20 15:40:58.226876] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:12.311 [2024-11-20 15:40:58.226890] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:12.311 [2024-11-20 15:40:58.226901] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:12.311 [2024-11-20 15:40:58.226914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:12.311 
[2024-11-20 15:40:58.226926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:12.311 [2024-11-20 15:40:58.226959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:12.311 [2024-11-20 15:40:58.226970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:12.311 [2024-11-20 15:40:58.226992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.311 [2024-11-20 15:40:58.227005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:12.311 [2024-11-20 15:40:58.227020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:29:12.311 [2024-11-20 15:40:58.227033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.311 [2024-11-20 15:40:58.249861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.311 [2024-11-20 15:40:58.249960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:12.311 [2024-11-20 15:40:58.249979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.741 ms 00:29:12.311 [2024-11-20 15:40:58.249991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.311 [2024-11-20 15:40:58.250650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.311 [2024-11-20 15:40:58.250671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:12.311 [2024-11-20 15:40:58.250686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:29:12.311 [2024-11-20 15:40:58.250698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.570 [2024-11-20 15:40:58.307027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.570 [2024-11-20 15:40:58.307350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:12.570 [2024-11-20 15:40:58.307382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.570 [2024-11-20 15:40:58.307395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.570 [2024-11-20 15:40:58.307489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.570 [2024-11-20 15:40:58.307503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:12.570 [2024-11-20 15:40:58.307515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.570 [2024-11-20 15:40:58.307527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.570 [2024-11-20 15:40:58.307684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.570 [2024-11-20 15:40:58.307713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:12.570 [2024-11-20 15:40:58.307724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.570 [2024-11-20 15:40:58.307734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.570 [2024-11-20 15:40:58.307753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.570 [2024-11-20 15:40:58.307764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:12.570 [2024-11-20 15:40:58.307775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.570 [2024-11-20 15:40:58.307786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.570 [2024-11-20 15:40:58.433455] mngt/ftl_mngt.c: 
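
(Editorial sketch, not part of the captured output: the ftl_debug.c dump above prints one record per band — here Bands 1-100, all "0 / 261120 wr_cnt: 0 state: free" — plus the device stats. When eyeballing long runs of these logs, a minimal Python tally is handy; the regex below is derived from the records above, and the function name is illustrative only.)

    import re
    from collections import Counter

    # Matches the ftl_dev_dump_bands records shown above, e.g.
    #   "Band 1: 0 / 261120 wr_cnt: 0 state: free"
    BAND_RE = re.compile(
        r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

    def summarize_bands(log_text):
        """Tally band states and total valid blocks from a dump."""
        states = Counter()
        valid_blocks = 0
        for m in BAND_RE.finditer(log_text):
            states[m.group(5)] += 1
            valid_blocks += int(m.group(2))
        return dict(states), valid_blocks

    # On the dump above this reports 100 bands, all 'free', with
    # 0 valid blocks of 100 * 261120 -- matching the stats record
    # "total valid LBAs: 0" that follows the band list.
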
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.570 [2024-11-20 15:40:58.433536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:12.570 [2024-11-20 15:40:58.433553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.570 [2024-11-20 15:40:58.433563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.830 [2024-11-20 15:40:58.538652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.830 [2024-11-20 15:40:58.538926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:12.830 [2024-11-20 15:40:58.538953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.830 [2024-11-20 15:40:58.538965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.830 [2024-11-20 15:40:58.539074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.830 [2024-11-20 15:40:58.539086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:12.830 [2024-11-20 15:40:58.539098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.830 [2024-11-20 15:40:58.539110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.830 [2024-11-20 15:40:58.539158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.830 [2024-11-20 15:40:58.539171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:12.830 [2024-11-20 15:40:58.539182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.830 [2024-11-20 15:40:58.539194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.830 [2024-11-20 15:40:58.539327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.830 [2024-11-20 15:40:58.539347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:12.830 [2024-11-20 15:40:58.539358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.830 [2024-11-20 15:40:58.539369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.830 [2024-11-20 15:40:58.539414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.830 [2024-11-20 15:40:58.539428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:12.830 [2024-11-20 15:40:58.539439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.830 [2024-11-20 15:40:58.539450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.830 [2024-11-20 15:40:58.539490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.830 [2024-11-20 15:40:58.539507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:12.830 [2024-11-20 15:40:58.539518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.830 [2024-11-20 15:40:58.539529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.830 [2024-11-20 15:40:58.539601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.830 [2024-11-20 15:40:58.539616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:12.830 [2024-11-20 15:40:58.539628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.830 [2024-11-20 15:40:58.539639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:29:12.830 [2024-11-20 15:40:58.539778] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 591.860 ms, result 0 00:29:14.206 00:29:14.206 00:29:14.206 15:40:59 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:14.206 [2024-11-20 15:41:00.006319] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:29:14.206 [2024-11-20 15:41:00.006442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79789 ] 00:29:14.465 [2024-11-20 15:41:00.176936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.465 [2024-11-20 15:41:00.288069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.723 [2024-11-20 15:41:00.656009] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:14.723 [2024-11-20 15:41:00.656080] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:14.981 [2024-11-20 15:41:00.817247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.981 [2024-11-20 15:41:00.817515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:14.981 [2024-11-20 15:41:00.817548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:14.981 [2024-11-20 15:41:00.817559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.981 [2024-11-20 15:41:00.817643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.981 [2024-11-20 15:41:00.817657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:14.981 [2024-11-20 15:41:00.817672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:14.981 [2024-11-20 15:41:00.817683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.981 [2024-11-20 15:41:00.817706] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:14.981 [2024-11-20 15:41:00.818640] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:14.981 [2024-11-20 15:41:00.818663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.981 [2024-11-20 15:41:00.818674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:14.981 [2024-11-20 15:41:00.818685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:29:14.981 [2024-11-20 15:41:00.818696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.981 [2024-11-20 15:41:00.820129] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:14.981 [2024-11-20 15:41:00.839308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.981 [2024-11-20 15:41:00.839346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:14.981 [2024-11-20 15:41:00.839361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.179 ms 00:29:14.981 [2024-11-20 15:41:00.839389] mngt/ftl_mngt.c: 431:trace_step: 
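
(Editorial sketch, not part of the captured output: the 'FTL shutdown' sequence above closes with "Management process finished ... duration = 591.860 ms", preceded by one Action/name/duration/status quadruple per management step. To find which steps dominate, the pairs can be pulled straight out of this flattened capture; the regex below targets the text as wrapped here, and the function name is illustrative only.)

    import re

    # Pairs each trace_step "name: <step>" record with the
    # "duration: <ms>" record that mngt/ftl_mngt.c emits next.
    # The name ends where the next runner timestamp (HH:MM:SS.mmm)
    # begins in this flattened capture.
    STEP_RE = re.compile(
        r"name: (.*?) \d{2}:\d{2}:\d{2}\.\d{3}.*?duration: ([\d.]+) ms",
        re.DOTALL)

    def slowest_steps(log_text, top=5):
        """Return the top-N slowest management steps, in ms."""
        steps = [(float(ms), name) for name, ms in STEP_RE.findall(log_text)]
        return sorted(steps, reverse=True)[:top]

    # Against the shutdown sequence above this surfaces
    # 'Set FTL clean state' (~46.0 ms), 'Persist superblock'
    # (~44.8 ms) and 'Persist band info metadata' (~44.0 ms).
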
*NOTICE*: [FTL][ftl0] status: 0 00:29:14.981 [2024-11-20 15:41:00.839457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.981 [2024-11-20 15:41:00.839470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:14.981 [2024-11-20 15:41:00.839481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:14.981 [2024-11-20 15:41:00.839491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.981 [2024-11-20 15:41:00.846280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.982 [2024-11-20 15:41:00.846309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:14.982 [2024-11-20 15:41:00.846321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.714 ms 00:29:14.982 [2024-11-20 15:41:00.846335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.982 [2024-11-20 15:41:00.846413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.982 [2024-11-20 15:41:00.846427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:14.982 [2024-11-20 15:41:00.846438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:14.982 [2024-11-20 15:41:00.846448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.982 [2024-11-20 15:41:00.846490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.982 [2024-11-20 15:41:00.846502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:14.982 [2024-11-20 15:41:00.846513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:14.982 [2024-11-20 15:41:00.846523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.982 [2024-11-20 15:41:00.846554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:14.982 [2024-11-20 15:41:00.851561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.982 [2024-11-20 15:41:00.851706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:14.982 [2024-11-20 15:41:00.851853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.018 ms 00:29:14.982 [2024-11-20 15:41:00.851899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.982 [2024-11-20 15:41:00.851956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.982 [2024-11-20 15:41:00.851988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:14.982 [2024-11-20 15:41:00.852019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:14.982 [2024-11-20 15:41:00.852104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.982 [2024-11-20 15:41:00.852192] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:14.982 [2024-11-20 15:41:00.852241] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:14.982 [2024-11-20 15:41:00.852315] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:14.982 [2024-11-20 15:41:00.852492] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:14.982 [2024-11-20 15:41:00.852634] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:14.982 [2024-11-20 15:41:00.852749] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:14.982 [2024-11-20 15:41:00.852802] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:14.982 [2024-11-20 15:41:00.852854] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:14.982 [2024-11-20 15:41:00.852955] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:14.982 [2024-11-20 15:41:00.853006] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:14.982 [2024-11-20 15:41:00.853036] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:14.982 [2024-11-20 15:41:00.853066] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:14.982 [2024-11-20 15:41:00.853139] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:14.982 [2024-11-20 15:41:00.853176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.982 [2024-11-20 15:41:00.853207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:14.982 [2024-11-20 15:41:00.853238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:29:14.982 [2024-11-20 15:41:00.853269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.982 [2024-11-20 15:41:00.853375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.982 [2024-11-20 15:41:00.853439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:14.982 [2024-11-20 15:41:00.853470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:14.982 [2024-11-20 15:41:00.853500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.982 [2024-11-20 15:41:00.853631] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:14.982 [2024-11-20 15:41:00.853672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:14.982 [2024-11-20 15:41:00.853685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:14.982 [2024-11-20 15:41:00.853696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.982 [2024-11-20 15:41:00.853706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:14.982 [2024-11-20 15:41:00.853716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:14.982 [2024-11-20 15:41:00.853725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:14.982 [2024-11-20 15:41:00.853735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:14.982 [2024-11-20 15:41:00.853744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:14.982 [2024-11-20 15:41:00.853754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:14.982 [2024-11-20 15:41:00.853763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:14.982 [2024-11-20 15:41:00.853773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:14.982 [2024-11-20 15:41:00.853782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:14.982 [2024-11-20 
15:41:00.853791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:14.982 [2024-11-20 15:41:00.853801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:14.982 [2024-11-20 15:41:00.853821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.982 [2024-11-20 15:41:00.853830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:14.982 [2024-11-20 15:41:00.853840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:14.982 [2024-11-20 15:41:00.853849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.982 [2024-11-20 15:41:00.853859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:14.982 [2024-11-20 15:41:00.853868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:14.982 [2024-11-20 15:41:00.853878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.982 [2024-11-20 15:41:00.853887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:14.982 [2024-11-20 15:41:00.853896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:14.982 [2024-11-20 15:41:00.853905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.982 [2024-11-20 15:41:00.853916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:14.982 [2024-11-20 15:41:00.853925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:14.982 [2024-11-20 15:41:00.853934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.982 [2024-11-20 15:41:00.853943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:14.982 [2024-11-20 15:41:00.853952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:14.982 [2024-11-20 15:41:00.853962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.982 [2024-11-20 15:41:00.853971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:14.982 [2024-11-20 15:41:00.853980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:14.982 [2024-11-20 15:41:00.853989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:14.982 [2024-11-20 15:41:00.853998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:14.982 [2024-11-20 15:41:00.854007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:14.982 [2024-11-20 15:41:00.854017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:14.982 [2024-11-20 15:41:00.854026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:14.982 [2024-11-20 15:41:00.854036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:14.982 [2024-11-20 15:41:00.854045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.982 [2024-11-20 15:41:00.854055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:14.982 [2024-11-20 15:41:00.854064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:14.982 [2024-11-20 15:41:00.854073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.982 [2024-11-20 15:41:00.854082] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:14.982 [2024-11-20 15:41:00.854093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:29:14.982 [2024-11-20 15:41:00.854102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:14.982 [2024-11-20 15:41:00.854112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.982 [2024-11-20 15:41:00.854122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:14.982 [2024-11-20 15:41:00.854132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:14.982 [2024-11-20 15:41:00.854141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:14.982 [2024-11-20 15:41:00.854150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:14.982 [2024-11-20 15:41:00.854159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:14.982 [2024-11-20 15:41:00.854168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:14.982 [2024-11-20 15:41:00.854180] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:14.982 [2024-11-20 15:41:00.854192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:14.982 [2024-11-20 15:41:00.854204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:14.982 [2024-11-20 15:41:00.854215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:14.982 [2024-11-20 15:41:00.854227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:14.982 [2024-11-20 15:41:00.854238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:14.982 [2024-11-20 15:41:00.854248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:14.983 [2024-11-20 15:41:00.854259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:14.983 [2024-11-20 15:41:00.854270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:14.983 [2024-11-20 15:41:00.854281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:14.983 [2024-11-20 15:41:00.854291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:14.983 [2024-11-20 15:41:00.854302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:14.983 [2024-11-20 15:41:00.854312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:14.983 [2024-11-20 15:41:00.854322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:14.983 [2024-11-20 15:41:00.854332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:14.983 [2024-11-20 15:41:00.854343] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:14.983 [2024-11-20 15:41:00.854354] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:14.983 [2024-11-20 15:41:00.854368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:14.983 [2024-11-20 15:41:00.854380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:14.983 [2024-11-20 15:41:00.854390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:14.983 [2024-11-20 15:41:00.854401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:14.983 [2024-11-20 15:41:00.854412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:14.983 [2024-11-20 15:41:00.854423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.983 [2024-11-20 15:41:00.854433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:14.983 [2024-11-20 15:41:00.854443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:29:14.983 [2024-11-20 15:41:00.854454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.983 [2024-11-20 15:41:00.893251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.983 [2024-11-20 15:41:00.893296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:14.983 [2024-11-20 15:41:00.893310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.747 ms 00:29:14.983 [2024-11-20 15:41:00.893321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.983 [2024-11-20 15:41:00.893415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.983 [2024-11-20 15:41:00.893426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:14.983 [2024-11-20 15:41:00.893437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:14.983 [2024-11-20 15:41:00.893447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.241 [2024-11-20 15:41:00.949078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:00.949308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:15.242 [2024-11-20 15:41:00.949333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.558 ms 00:29:15.242 [2024-11-20 15:41:00.949345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:00.949398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:00.949410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:15.242 [2024-11-20 15:41:00.949427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:15.242 [2024-11-20 15:41:00.949437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:00.949950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 
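
(Editorial sketch, not part of the captured output: the superblock dump above gives regions in hex FTL blocks, while the earlier ftl_layout.c dump reports MiB. SPDK FTL uses a 4 KiB block, and the two views agree — e.g. Region type:0x2 blk_sz:0x5000 is 20480 blocks = 80 MiB, matching "Region l2p ... blocks: 80.00 MiB". A quick converter, with assertions checked against the records above:)

    FTL_BLOCK_SIZE = 4096  # bytes; SPDK FTL's metadata block size

    def blocks_to_mib(blk_sz_hex):
        """Convert a blk_sz field such as '0x5000' to MiB."""
        return int(blk_sz_hex, 16) * FTL_BLOCK_SIZE / (1 << 20)

    # Cross-checks against the dumps in this log:
    assert blocks_to_mib("0x5000") == 80.0         # l2p, 80.00 MiB
    assert blocks_to_mib("0x1900000") == 102400.0  # data_btm, 102400.00 MiB
    assert blocks_to_mib("0x20") == 0.125          # sb offset, "0.12 MiB"
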
15:41:00.949966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:15.242 [2024-11-20 15:41:00.949977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:29:15.242 [2024-11-20 15:41:00.949987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:00.950106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:00.950119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:15.242 [2024-11-20 15:41:00.950130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:29:15.242 [2024-11-20 15:41:00.950146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:00.968285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:00.968454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:15.242 [2024-11-20 15:41:00.968483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.117 ms 00:29:15.242 [2024-11-20 15:41:00.968494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:00.987502] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:15.242 [2024-11-20 15:41:00.987672] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:15.242 [2024-11-20 15:41:00.987694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:00.987706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:15.242 [2024-11-20 15:41:00.987717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.050 ms 00:29:15.242 [2024-11-20 15:41:00.987727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:01.018168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:01.018336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:15.242 [2024-11-20 15:41:01.018357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.400 ms 00:29:15.242 [2024-11-20 15:41:01.018368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:01.037748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:01.037786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:15.242 [2024-11-20 15:41:01.037800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.283 ms 00:29:15.242 [2024-11-20 15:41:01.037810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:01.056694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:01.056737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:15.242 [2024-11-20 15:41:01.056750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.843 ms 00:29:15.242 [2024-11-20 15:41:01.056760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:01.057619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:01.057654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:29:15.242 [2024-11-20 15:41:01.057667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:29:15.242 [2024-11-20 15:41:01.057682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:01.147390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:01.147698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:15.242 [2024-11-20 15:41:01.147732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.683 ms 00:29:15.242 [2024-11-20 15:41:01.147744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:01.159324] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:15.242 [2024-11-20 15:41:01.162656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:01.162689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:15.242 [2024-11-20 15:41:01.162704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.835 ms 00:29:15.242 [2024-11-20 15:41:01.162715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:01.162823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:01.162836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:15.242 [2024-11-20 15:41:01.162848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:15.242 [2024-11-20 15:41:01.162862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:01.162952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:01.162965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:15.242 [2024-11-20 15:41:01.162976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:15.242 [2024-11-20 15:41:01.162986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:01.163009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:01.163020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:15.242 [2024-11-20 15:41:01.163031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:15.242 [2024-11-20 15:41:01.163041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.242 [2024-11-20 15:41:01.163075] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:15.242 [2024-11-20 15:41:01.163087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.242 [2024-11-20 15:41:01.163097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:15.242 [2024-11-20 15:41:01.163107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:15.242 [2024-11-20 15:41:01.163117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.500 [2024-11-20 15:41:01.201275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.500 [2024-11-20 15:41:01.201316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:15.500 [2024-11-20 15:41:01.201331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 38.134 ms 00:29:15.501 [2024-11-20 15:41:01.201348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.501 [2024-11-20 15:41:01.201427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.501 [2024-11-20 15:41:01.201439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:15.501 [2024-11-20 15:41:01.201451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:15.501 [2024-11-20 15:41:01.201461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.501 [2024-11-20 15:41:01.202813] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 385.035 ms, result 0 00:29:16.878  [2024-11-20T15:41:03.454Z] Copying: 31/1024 [MB] (31 MBps) [2024-11-20T15:41:04.830Z] Copying: 65/1024 [MB] (33 MBps) [2024-11-20T15:41:05.767Z] Copying: 96/1024 [MB] (31 MBps) [2024-11-20T15:41:06.704Z] Copying: 129/1024 [MB] (32 MBps) [2024-11-20T15:41:07.640Z] Copying: 161/1024 [MB] (32 MBps) [2024-11-20T15:41:08.576Z] Copying: 194/1024 [MB] (32 MBps) [2024-11-20T15:41:09.540Z] Copying: 226/1024 [MB] (32 MBps) [2024-11-20T15:41:10.474Z] Copying: 257/1024 [MB] (31 MBps) [2024-11-20T15:41:11.851Z] Copying: 290/1024 [MB] (33 MBps) [2024-11-20T15:41:12.786Z] Copying: 324/1024 [MB] (33 MBps) [2024-11-20T15:41:13.721Z] Copying: 357/1024 [MB] (32 MBps) [2024-11-20T15:41:14.657Z] Copying: 390/1024 [MB] (33 MBps) [2024-11-20T15:41:15.593Z] Copying: 423/1024 [MB] (32 MBps) [2024-11-20T15:41:16.529Z] Copying: 456/1024 [MB] (32 MBps) [2024-11-20T15:41:17.473Z] Copying: 489/1024 [MB] (33 MBps) [2024-11-20T15:41:18.848Z] Copying: 523/1024 [MB] (33 MBps) [2024-11-20T15:41:19.782Z] Copying: 557/1024 [MB] (34 MBps) [2024-11-20T15:41:20.717Z] Copying: 591/1024 [MB] (34 MBps) [2024-11-20T15:41:21.654Z] Copying: 626/1024 [MB] (34 MBps) [2024-11-20T15:41:22.589Z] Copying: 659/1024 [MB] (33 MBps) [2024-11-20T15:41:23.524Z] Copying: 693/1024 [MB] (33 MBps) [2024-11-20T15:41:24.461Z] Copying: 726/1024 [MB] (33 MBps) [2024-11-20T15:41:25.837Z] Copying: 760/1024 [MB] (33 MBps) [2024-11-20T15:41:26.771Z] Copying: 794/1024 [MB] (33 MBps) [2024-11-20T15:41:27.706Z] Copying: 827/1024 [MB] (33 MBps) [2024-11-20T15:41:28.641Z] Copying: 859/1024 [MB] (31 MBps) [2024-11-20T15:41:29.577Z] Copying: 890/1024 [MB] (31 MBps) [2024-11-20T15:41:30.570Z] Copying: 921/1024 [MB] (30 MBps) [2024-11-20T15:41:31.505Z] Copying: 951/1024 [MB] (30 MBps) [2024-11-20T15:41:32.441Z] Copying: 982/1024 [MB] (30 MBps) [2024-11-20T15:41:33.008Z] Copying: 1012/1024 [MB] (29 MBps) [2024-11-20T15:41:33.267Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-20 15:41:33.166861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.309 [2024-11-20 15:41:33.167147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:47.309 [2024-11-20 15:41:33.167198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:47.309 [2024-11-20 15:41:33.167214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.309 [2024-11-20 15:41:33.167264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:47.309 [2024-11-20 15:41:33.172853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.309 [2024-11-20 15:41:33.172907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:47.309 [2024-11-20 
15:41:33.172937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.559 ms 00:29:47.309 [2024-11-20 15:41:33.172951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.309 [2024-11-20 15:41:33.173193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.309 [2024-11-20 15:41:33.173214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:47.309 [2024-11-20 15:41:33.173230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:29:47.309 [2024-11-20 15:41:33.173243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.309 [2024-11-20 15:41:33.177619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.309 [2024-11-20 15:41:33.177660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:47.309 [2024-11-20 15:41:33.177676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.353 ms 00:29:47.309 [2024-11-20 15:41:33.177691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.309 [2024-11-20 15:41:33.184087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.309 [2024-11-20 15:41:33.184135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:47.309 [2024-11-20 15:41:33.184151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.354 ms 00:29:47.309 [2024-11-20 15:41:33.184181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.309 [2024-11-20 15:41:33.227844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.309 [2024-11-20 15:41:33.228074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:47.309 [2024-11-20 15:41:33.228107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.556 ms 00:29:47.309 [2024-11-20 15:41:33.228121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.309 [2024-11-20 15:41:33.251652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.309 [2024-11-20 15:41:33.251739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:47.309 [2024-11-20 15:41:33.251759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.447 ms 00:29:47.309 [2024-11-20 15:41:33.251772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.309 [2024-11-20 15:41:33.251938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.309 [2024-11-20 15:41:33.251968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:47.309 [2024-11-20 15:41:33.251982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:29:47.309 [2024-11-20 15:41:33.251995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.569 [2024-11-20 15:41:33.292412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.569 [2024-11-20 15:41:33.292488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:47.569 [2024-11-20 15:41:33.292507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.391 ms 00:29:47.569 [2024-11-20 15:41:33.292520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.569 [2024-11-20 15:41:33.330645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.569 [2024-11-20 15:41:33.330913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist trim metadata 00:29:47.569 [2024-11-20 15:41:33.330943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.040 ms 00:29:47.569 [2024-11-20 15:41:33.330955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.569 [2024-11-20 15:41:33.368355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.569 [2024-11-20 15:41:33.368413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:47.569 [2024-11-20 15:41:33.368431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.343 ms 00:29:47.569 [2024-11-20 15:41:33.368443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.569 [2024-11-20 15:41:33.406319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.569 [2024-11-20 15:41:33.406382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:47.569 [2024-11-20 15:41:33.406417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.767 ms 00:29:47.569 [2024-11-20 15:41:33.406432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.569 [2024-11-20 15:41:33.406522] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:47.569 [2024-11-20 15:41:33.406544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 
state: free 00:29:47.569 [2024-11-20 15:41:33.406807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.406997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 
0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:47.569 [2024-11-20 15:41:33.407248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407858] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.407984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:47.570 [2024-11-20 15:41:33.408007] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:47.570 [2024-11-20 15:41:33.408025] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c8d862d-b117-4ac0-b4e8-e65fd9e66655 00:29:47.570 [2024-11-20 15:41:33.408039] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:47.570 [2024-11-20 15:41:33.408052] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:47.570 [2024-11-20 15:41:33.408065] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:47.570 [2024-11-20 15:41:33.408079] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:47.570 [2024-11-20 15:41:33.408092] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:47.570 [2024-11-20 15:41:33.408106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:47.570 [2024-11-20 15:41:33.408133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:47.570 [2024-11-20 15:41:33.408146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:47.570 [2024-11-20 15:41:33.408158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:47.570 [2024-11-20 15:41:33.408171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.570 [2024-11-20 15:41:33.408185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:47.570 [2024-11-20 15:41:33.408199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:29:47.570 [2024-11-20 15:41:33.408212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.570 [2024-11-20 15:41:33.430348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.570 [2024-11-20 15:41:33.430402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:47.570 [2024-11-20 15:41:33.430419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.077 ms 00:29:47.570 [2024-11-20 15:41:33.430432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.570 
[2024-11-20 15:41:33.431071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.570 [2024-11-20 15:41:33.431098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:47.570 [2024-11-20 15:41:33.431113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:29:47.570 [2024-11-20 15:41:33.431136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.570 [2024-11-20 15:41:33.484443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.570 [2024-11-20 15:41:33.484517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:47.570 [2024-11-20 15:41:33.484535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.570 [2024-11-20 15:41:33.484548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.570 [2024-11-20 15:41:33.484663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.570 [2024-11-20 15:41:33.484678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:47.570 [2024-11-20 15:41:33.484690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.570 [2024-11-20 15:41:33.484710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.570 [2024-11-20 15:41:33.484806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.570 [2024-11-20 15:41:33.484821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:47.570 [2024-11-20 15:41:33.484835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.570 [2024-11-20 15:41:33.484846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.570 [2024-11-20 15:41:33.484867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.570 [2024-11-20 15:41:33.484880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:47.570 [2024-11-20 15:41:33.484893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.570 [2024-11-20 15:41:33.484905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.829 [2024-11-20 15:41:33.615142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.829 [2024-11-20 15:41:33.615219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:47.829 [2024-11-20 15:41:33.615239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.829 [2024-11-20 15:41:33.615253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.829 [2024-11-20 15:41:33.726624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.829 [2024-11-20 15:41:33.726898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:47.829 [2024-11-20 15:41:33.726929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.829 [2024-11-20 15:41:33.726954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.830 [2024-11-20 15:41:33.727064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.830 [2024-11-20 15:41:33.727080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:47.830 [2024-11-20 15:41:33.727093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.830 [2024-11-20 
15:41:33.727108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.830 [2024-11-20 15:41:33.727162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.830 [2024-11-20 15:41:33.727178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:47.830 [2024-11-20 15:41:33.727192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.830 [2024-11-20 15:41:33.727204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.830 [2024-11-20 15:41:33.727368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.830 [2024-11-20 15:41:33.727386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:47.830 [2024-11-20 15:41:33.727402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.830 [2024-11-20 15:41:33.727415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.830 [2024-11-20 15:41:33.727466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.830 [2024-11-20 15:41:33.727482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:47.830 [2024-11-20 15:41:33.727497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.830 [2024-11-20 15:41:33.727511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.830 [2024-11-20 15:41:33.727563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.830 [2024-11-20 15:41:33.727578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:47.830 [2024-11-20 15:41:33.727618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.830 [2024-11-20 15:41:33.727633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.830 [2024-11-20 15:41:33.727688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.830 [2024-11-20 15:41:33.727715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:47.830 [2024-11-20 15:41:33.727729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.830 [2024-11-20 15:41:33.727742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.830 [2024-11-20 15:41:33.727891] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 560.982 ms, result 0 00:29:49.206 00:29:49.206 00:29:49.206 15:41:34 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:51.110 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:51.110 15:41:36 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:51.110 [2024-11-20 15:41:36.893896] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:29:51.110 [2024-11-20 15:41:36.894250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80159 ] 00:29:51.369 [2024-11-20 15:41:37.067471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.369 [2024-11-20 15:41:37.186382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.628 [2024-11-20 15:41:37.566767] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:51.628 [2024-11-20 15:41:37.566832] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:51.887 [2024-11-20 15:41:37.728270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.887 [2024-11-20 15:41:37.728521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:51.887 [2024-11-20 15:41:37.728557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:51.887 [2024-11-20 15:41:37.728593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.887 [2024-11-20 15:41:37.728697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.887 [2024-11-20 15:41:37.728715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:51.887 [2024-11-20 15:41:37.728733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:51.887 [2024-11-20 15:41:37.728746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.887 [2024-11-20 15:41:37.728773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:51.887 [2024-11-20 15:41:37.730094] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:51.887 [2024-11-20 15:41:37.730136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.887 [2024-11-20 15:41:37.730148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:51.887 [2024-11-20 15:41:37.730160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:29:51.887 [2024-11-20 15:41:37.730171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.887 [2024-11-20 15:41:37.731723] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:51.887 [2024-11-20 15:41:37.751707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.887 [2024-11-20 15:41:37.751749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:51.887 [2024-11-20 15:41:37.751763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.985 ms 00:29:51.887 [2024-11-20 15:41:37.751774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.887 [2024-11-20 15:41:37.751844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.887 [2024-11-20 15:41:37.751858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:51.887 [2024-11-20 15:41:37.751869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:51.887 [2024-11-20 15:41:37.751879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.887 [2024-11-20 15:41:37.758659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:51.887 [2024-11-20 15:41:37.758823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:51.887 [2024-11-20 15:41:37.758845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.704 ms 00:29:51.887 [2024-11-20 15:41:37.758863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.887 [2024-11-20 15:41:37.758950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.887 [2024-11-20 15:41:37.758963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:51.887 [2024-11-20 15:41:37.758973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:51.887 [2024-11-20 15:41:37.758983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.887 [2024-11-20 15:41:37.759029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.887 [2024-11-20 15:41:37.759041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:51.888 [2024-11-20 15:41:37.759052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:51.888 [2024-11-20 15:41:37.759062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.888 [2024-11-20 15:41:37.759093] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:51.888 [2024-11-20 15:41:37.764164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.888 [2024-11-20 15:41:37.764198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:51.888 [2024-11-20 15:41:37.764211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.082 ms 00:29:51.888 [2024-11-20 15:41:37.764225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.888 [2024-11-20 15:41:37.764257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.888 [2024-11-20 15:41:37.764269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:51.888 [2024-11-20 15:41:37.764280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:51.888 [2024-11-20 15:41:37.764289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.888 [2024-11-20 15:41:37.764345] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:51.888 [2024-11-20 15:41:37.764369] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:51.888 [2024-11-20 15:41:37.764406] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:51.888 [2024-11-20 15:41:37.764427] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:51.888 [2024-11-20 15:41:37.764520] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:51.888 [2024-11-20 15:41:37.764535] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:51.888 [2024-11-20 15:41:37.764548] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:51.888 [2024-11-20 15:41:37.764561] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:51.888 [2024-11-20 15:41:37.764594] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:51.888 [2024-11-20 15:41:37.764605] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:51.888 [2024-11-20 15:41:37.764615] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:51.888 [2024-11-20 15:41:37.764626] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:51.888 [2024-11-20 15:41:37.764639] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:51.888 [2024-11-20 15:41:37.764650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.888 [2024-11-20 15:41:37.764660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:51.888 [2024-11-20 15:41:37.764671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:29:51.888 [2024-11-20 15:41:37.764681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.888 [2024-11-20 15:41:37.764755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.888 [2024-11-20 15:41:37.764765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:51.888 [2024-11-20 15:41:37.764776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:51.888 [2024-11-20 15:41:37.764785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.888 [2024-11-20 15:41:37.764886] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:51.888 [2024-11-20 15:41:37.764901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:51.888 [2024-11-20 15:41:37.764911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:51.888 [2024-11-20 15:41:37.764921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.888 [2024-11-20 15:41:37.764931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:51.888 [2024-11-20 15:41:37.764941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:51.888 [2024-11-20 15:41:37.764950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:51.888 [2024-11-20 15:41:37.764960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:51.888 [2024-11-20 15:41:37.764969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:51.888 [2024-11-20 15:41:37.764978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:51.888 [2024-11-20 15:41:37.765004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:51.888 [2024-11-20 15:41:37.765015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:51.888 [2024-11-20 15:41:37.765025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:51.888 [2024-11-20 15:41:37.765035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:51.888 [2024-11-20 15:41:37.765045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:51.888 [2024-11-20 15:41:37.765065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.888 [2024-11-20 15:41:37.765075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:51.888 [2024-11-20 15:41:37.765085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:51.888 [2024-11-20 15:41:37.765095] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.888 [2024-11-20 15:41:37.765106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:51.888 [2024-11-20 15:41:37.765116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:51.888 [2024-11-20 15:41:37.765126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:51.888 [2024-11-20 15:41:37.765136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:51.888 [2024-11-20 15:41:37.765146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:51.888 [2024-11-20 15:41:37.765156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:51.888 [2024-11-20 15:41:37.765166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:51.888 [2024-11-20 15:41:37.765175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:51.888 [2024-11-20 15:41:37.765186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:51.888 [2024-11-20 15:41:37.765196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:51.888 [2024-11-20 15:41:37.765206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:51.888 [2024-11-20 15:41:37.765215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:51.888 [2024-11-20 15:41:37.765225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:51.888 [2024-11-20 15:41:37.765235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:51.888 [2024-11-20 15:41:37.765245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:51.888 [2024-11-20 15:41:37.765254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:51.888 [2024-11-20 15:41:37.765264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:51.888 [2024-11-20 15:41:37.765274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:51.888 [2024-11-20 15:41:37.765283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:51.888 [2024-11-20 15:41:37.765293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:51.888 [2024-11-20 15:41:37.765303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.888 [2024-11-20 15:41:37.765320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:51.888 [2024-11-20 15:41:37.765330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:51.888 [2024-11-20 15:41:37.765340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.888 [2024-11-20 15:41:37.765349] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:51.888 [2024-11-20 15:41:37.765360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:51.888 [2024-11-20 15:41:37.765372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:51.888 [2024-11-20 15:41:37.765382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.888 [2024-11-20 15:41:37.765392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:51.888 [2024-11-20 15:41:37.765402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:51.888 [2024-11-20 15:41:37.765413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:51.888 
[2024-11-20 15:41:37.765423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:51.888 [2024-11-20 15:41:37.765432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:51.889 [2024-11-20 15:41:37.765442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:51.889 [2024-11-20 15:41:37.765454] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:51.889 [2024-11-20 15:41:37.765467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:51.889 [2024-11-20 15:41:37.765480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:51.889 [2024-11-20 15:41:37.765492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:51.889 [2024-11-20 15:41:37.765503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:51.889 [2024-11-20 15:41:37.765514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:51.889 [2024-11-20 15:41:37.765526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:51.889 [2024-11-20 15:41:37.765537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:51.889 [2024-11-20 15:41:37.765548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:51.889 [2024-11-20 15:41:37.765559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:51.889 [2024-11-20 15:41:37.765570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:51.889 [2024-11-20 15:41:37.765581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:51.889 [2024-11-20 15:41:37.765604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:51.889 [2024-11-20 15:41:37.765615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:51.889 [2024-11-20 15:41:37.765627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:51.889 [2024-11-20 15:41:37.765638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:51.889 [2024-11-20 15:41:37.765649] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:51.889 [2024-11-20 15:41:37.765665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:51.889 [2024-11-20 15:41:37.765677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:51.889 [2024-11-20 15:41:37.765688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:51.889 [2024-11-20 15:41:37.765700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:51.889 [2024-11-20 15:41:37.765712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:51.889 [2024-11-20 15:41:37.765724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.889 [2024-11-20 15:41:37.765734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:51.889 [2024-11-20 15:41:37.765746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:29:51.889 [2024-11-20 15:41:37.765756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.889 [2024-11-20 15:41:37.806661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.889 [2024-11-20 15:41:37.806710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:51.889 [2024-11-20 15:41:37.806726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.851 ms 00:29:51.889 [2024-11-20 15:41:37.806737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.889 [2024-11-20 15:41:37.806835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.889 [2024-11-20 15:41:37.806847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:51.889 [2024-11-20 15:41:37.806858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:51.889 [2024-11-20 15:41:37.806868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.148 [2024-11-20 15:41:37.864191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.148 [2024-11-20 15:41:37.864238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:52.148 [2024-11-20 15:41:37.864253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.246 ms 00:29:52.148 [2024-11-20 15:41:37.864263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.148 [2024-11-20 15:41:37.864319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.148 [2024-11-20 15:41:37.864331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:52.148 [2024-11-20 15:41:37.864346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:52.148 [2024-11-20 15:41:37.864356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.148 [2024-11-20 15:41:37.864865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.148 [2024-11-20 15:41:37.864881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:52.148 [2024-11-20 15:41:37.864893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:29:52.148 [2024-11-20 15:41:37.864904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.148 [2024-11-20 15:41:37.865024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.148 [2024-11-20 15:41:37.865038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:52.148 [2024-11-20 15:41:37.865048] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:29:52.148 [2024-11-20 15:41:37.865065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.148 [2024-11-20 15:41:37.884933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.148 [2024-11-20 15:41:37.884977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:52.148 [2024-11-20 15:41:37.884996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.846 ms 00:29:52.148 [2024-11-20 15:41:37.885007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.148 [2024-11-20 15:41:37.904432] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:52.148 [2024-11-20 15:41:37.904474] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:52.149 [2024-11-20 15:41:37.904491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.149 [2024-11-20 15:41:37.904502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:52.149 [2024-11-20 15:41:37.904514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.352 ms 00:29:52.149 [2024-11-20 15:41:37.904524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.149 [2024-11-20 15:41:37.934782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.149 [2024-11-20 15:41:37.934956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:52.149 [2024-11-20 15:41:37.934978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.212 ms 00:29:52.149 [2024-11-20 15:41:37.934990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.149 [2024-11-20 15:41:37.953887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.149 [2024-11-20 15:41:37.953927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:52.149 [2024-11-20 15:41:37.953940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.854 ms 00:29:52.149 [2024-11-20 15:41:37.953951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.149 [2024-11-20 15:41:37.972895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.149 [2024-11-20 15:41:37.973039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:52.149 [2024-11-20 15:41:37.973059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.898 ms 00:29:52.149 [2024-11-20 15:41:37.973069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.149 [2024-11-20 15:41:37.973897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.149 [2024-11-20 15:41:37.973925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:52.149 [2024-11-20 15:41:37.973938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:29:52.149 [2024-11-20 15:41:37.973953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.149 [2024-11-20 15:41:38.061449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.149 [2024-11-20 15:41:38.061518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:52.149 [2024-11-20 15:41:38.061542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.471 ms 00:29:52.149 [2024-11-20 15:41:38.061553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.149 [2024-11-20 15:41:38.073361] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:52.149 [2024-11-20 15:41:38.076644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.149 [2024-11-20 15:41:38.076680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:52.149 [2024-11-20 15:41:38.076696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.008 ms 00:29:52.149 [2024-11-20 15:41:38.076707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.149 [2024-11-20 15:41:38.076815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.149 [2024-11-20 15:41:38.076829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:52.149 [2024-11-20 15:41:38.076841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:52.149 [2024-11-20 15:41:38.076854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.149 [2024-11-20 15:41:38.076948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.149 [2024-11-20 15:41:38.076961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:52.149 [2024-11-20 15:41:38.076972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:52.149 [2024-11-20 15:41:38.076982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.149 [2024-11-20 15:41:38.077007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.149 [2024-11-20 15:41:38.077018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:52.149 [2024-11-20 15:41:38.077029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:52.149 [2024-11-20 15:41:38.077039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.149 [2024-11-20 15:41:38.077075] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:52.149 [2024-11-20 15:41:38.077087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.149 [2024-11-20 15:41:38.077098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:52.149 [2024-11-20 15:41:38.077108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:52.149 [2024-11-20 15:41:38.077118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.408 [2024-11-20 15:41:38.115620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.408 [2024-11-20 15:41:38.115665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:52.408 [2024-11-20 15:41:38.115681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.477 ms 00:29:52.408 [2024-11-20 15:41:38.115700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.408 [2024-11-20 15:41:38.115791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.408 [2024-11-20 15:41:38.115804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:52.408 [2024-11-20 15:41:38.115815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:52.408 [2024-11-20 15:41:38.115825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:52.408 [2024-11-20 15:41:38.117015] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 388.256 ms, result 0 00:29:53.346  [2024-11-20T15:41:40.238Z] Copying: 32/1024 [MB] (32 MBps) [2024-11-20T15:41:41.173Z] Copying: 65/1024 [MB] (33 MBps) [2024-11-20T15:41:42.550Z] Copying: 97/1024 [MB] (32 MBps) [2024-11-20T15:41:43.488Z] Copying: 129/1024 [MB] (31 MBps) [2024-11-20T15:41:44.425Z] Copying: 160/1024 [MB] (30 MBps) [2024-11-20T15:41:45.362Z] Copying: 190/1024 [MB] (30 MBps) [2024-11-20T15:41:46.297Z] Copying: 221/1024 [MB] (31 MBps) [2024-11-20T15:41:47.233Z] Copying: 253/1024 [MB] (31 MBps) [2024-11-20T15:41:48.168Z] Copying: 285/1024 [MB] (31 MBps) [2024-11-20T15:41:49.546Z] Copying: 316/1024 [MB] (31 MBps) [2024-11-20T15:41:50.482Z] Copying: 348/1024 [MB] (32 MBps) [2024-11-20T15:41:51.417Z] Copying: 380/1024 [MB] (31 MBps) [2024-11-20T15:41:52.395Z] Copying: 412/1024 [MB] (32 MBps) [2024-11-20T15:41:53.329Z] Copying: 443/1024 [MB] (31 MBps) [2024-11-20T15:41:54.265Z] Copying: 475/1024 [MB] (31 MBps) [2024-11-20T15:41:55.200Z] Copying: 506/1024 [MB] (30 MBps) [2024-11-20T15:41:56.136Z] Copying: 538/1024 [MB] (31 MBps) [2024-11-20T15:41:57.513Z] Copying: 569/1024 [MB] (31 MBps) [2024-11-20T15:41:58.451Z] Copying: 600/1024 [MB] (31 MBps) [2024-11-20T15:41:59.386Z] Copying: 630/1024 [MB] (30 MBps) [2024-11-20T15:42:00.351Z] Copying: 662/1024 [MB] (31 MBps) [2024-11-20T15:42:01.286Z] Copying: 694/1024 [MB] (32 MBps) [2024-11-20T15:42:02.221Z] Copying: 726/1024 [MB] (32 MBps) [2024-11-20T15:42:03.156Z] Copying: 759/1024 [MB] (32 MBps) [2024-11-20T15:42:04.533Z] Copying: 791/1024 [MB] (32 MBps) [2024-11-20T15:42:05.469Z] Copying: 824/1024 [MB] (32 MBps) [2024-11-20T15:42:06.404Z] Copying: 857/1024 [MB] (33 MBps) [2024-11-20T15:42:07.341Z] Copying: 890/1024 [MB] (32 MBps) [2024-11-20T15:42:08.283Z] Copying: 922/1024 [MB] (32 MBps) [2024-11-20T15:42:09.219Z] Copying: 955/1024 [MB] (32 MBps) [2024-11-20T15:42:10.155Z] Copying: 988/1024 [MB] (33 MBps) [2024-11-20T15:42:11.532Z] Copying: 1022/1024 [MB] (33 MBps) [2024-11-20T15:42:11.532Z] Copying: 1048540/1048576 [kB] (1584 kBps) [2024-11-20T15:42:11.532Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-20 15:42:11.177393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.574 [2024-11-20 15:42:11.177711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:25.574 [2024-11-20 15:42:11.177828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:25.574 [2024-11-20 15:42:11.177889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.574 [2024-11-20 15:42:11.179248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:25.574 [2024-11-20 15:42:11.187850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.574 [2024-11-20 15:42:11.188074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:25.574 [2024-11-20 15:42:11.188201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.365 ms 00:30:25.574 [2024-11-20 15:42:11.188221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.574 [2024-11-20 15:42:11.199782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.574 [2024-11-20 15:42:11.199856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:25.574 [2024-11-20 15:42:11.199873] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.168 ms 00:30:25.574 [2024-11-20 15:42:11.199897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.574 [2024-11-20 15:42:11.223192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.574 [2024-11-20 15:42:11.223300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:25.574 [2024-11-20 15:42:11.223320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.267 ms 00:30:25.574 [2024-11-20 15:42:11.223333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.574 [2024-11-20 15:42:11.229364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.574 [2024-11-20 15:42:11.229627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:25.574 [2024-11-20 15:42:11.229657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.979 ms 00:30:25.574 [2024-11-20 15:42:11.229670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.574 [2024-11-20 15:42:11.276020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.574 [2024-11-20 15:42:11.276317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:25.574 [2024-11-20 15:42:11.276347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.248 ms 00:30:25.574 [2024-11-20 15:42:11.276360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.574 [2024-11-20 15:42:11.302021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.574 [2024-11-20 15:42:11.302103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:25.574 [2024-11-20 15:42:11.302121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.574 ms 00:30:25.574 [2024-11-20 15:42:11.302133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.574 [2024-11-20 15:42:11.382331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.574 [2024-11-20 15:42:11.382652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:25.574 [2024-11-20 15:42:11.382683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.109 ms 00:30:25.574 [2024-11-20 15:42:11.382697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.574 [2024-11-20 15:42:11.427802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.574 [2024-11-20 15:42:11.428089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:25.574 [2024-11-20 15:42:11.428118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.066 ms 00:30:25.574 [2024-11-20 15:42:11.428129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.574 [2024-11-20 15:42:11.472505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.574 [2024-11-20 15:42:11.472615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:25.574 [2024-11-20 15:42:11.472632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.306 ms 00:30:25.574 [2024-11-20 15:42:11.472660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.574 [2024-11-20 15:42:11.516531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.574 [2024-11-20 15:42:11.516850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist superblock 00:30:25.574 [2024-11-20 15:42:11.516894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.791 ms 00:30:25.574 [2024-11-20 15:42:11.516907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.838 [2024-11-20 15:42:11.560393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.838 [2024-11-20 15:42:11.560468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:25.838 [2024-11-20 15:42:11.560485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.314 ms 00:30:25.838 [2024-11-20 15:42:11.560495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.838 [2024-11-20 15:42:11.560605] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:25.838 [2024-11-20 15:42:11.560625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 117760 / 261120 wr_cnt: 1 state: open 00:30:25.838 [2024-11-20 15:42:11.560641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 
15:42:11.560855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.560996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:30:25.838 [2024-11-20 15:42:11.561164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:25.838 [2024-11-20 15:42:11.561545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:25.839 [2024-11-20 15:42:11.561874] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:25.839 [2024-11-20 15:42:11.561886] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c8d862d-b117-4ac0-b4e8-e65fd9e66655 00:30:25.839 [2024-11-20 15:42:11.561898] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 117760 00:30:25.839 [2024-11-20 15:42:11.561910] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 118720 00:30:25.839 [2024-11-20 15:42:11.561921] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 117760 00:30:25.839 [2024-11-20 15:42:11.561934] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:30:25.839 [2024-11-20 15:42:11.561946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:25.839 [2024-11-20 15:42:11.561967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:25.839 [2024-11-20 15:42:11.561992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:25.839 [2024-11-20 15:42:11.562003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:25.839 [2024-11-20 15:42:11.562014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:25.839 [2024-11-20 15:42:11.562025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.839 [2024-11-20 15:42:11.562038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:25.839 [2024-11-20 15:42:11.562049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:30:25.839 [2024-11-20 15:42:11.562061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.839 [2024-11-20 15:42:11.584631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.839 [2024-11-20 15:42:11.584922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:25.839 [2024-11-20 15:42:11.584950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.509 ms 00:30:25.839 [2024-11-20 15:42:11.584974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.839 [2024-11-20 15:42:11.585665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.839 [2024-11-20 15:42:11.585683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:25.839 [2024-11-20 15:42:11.585696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:30:25.839 [2024-11-20 15:42:11.585707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.839 
[2024-11-20 15:42:11.645054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.839 [2024-11-20 15:42:11.645131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:25.839 [2024-11-20 15:42:11.645148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.839 [2024-11-20 15:42:11.645159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.839 [2024-11-20 15:42:11.645244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.839 [2024-11-20 15:42:11.645256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:25.839 [2024-11-20 15:42:11.645268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.839 [2024-11-20 15:42:11.645279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.839 [2024-11-20 15:42:11.645403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.839 [2024-11-20 15:42:11.645418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:25.839 [2024-11-20 15:42:11.645435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.839 [2024-11-20 15:42:11.645446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.839 [2024-11-20 15:42:11.645465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.839 [2024-11-20 15:42:11.645476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:25.839 [2024-11-20 15:42:11.645488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.839 [2024-11-20 15:42:11.645499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.839 [2024-11-20 15:42:11.783376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.839 [2024-11-20 15:42:11.783453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:25.839 [2024-11-20 15:42:11.783497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.839 [2024-11-20 15:42:11.783509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.105 [2024-11-20 15:42:11.898504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.105 [2024-11-20 15:42:11.898837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:26.105 [2024-11-20 15:42:11.898883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.105 [2024-11-20 15:42:11.898896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.105 [2024-11-20 15:42:11.899021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.105 [2024-11-20 15:42:11.899036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:26.105 [2024-11-20 15:42:11.899049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.105 [2024-11-20 15:42:11.899065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.105 [2024-11-20 15:42:11.899118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.105 [2024-11-20 15:42:11.899132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:26.105 [2024-11-20 15:42:11.899144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.105 [2024-11-20 15:42:11.899156] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.105 [2024-11-20 15:42:11.899292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.105 [2024-11-20 15:42:11.899308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:26.105 [2024-11-20 15:42:11.899320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.105 [2024-11-20 15:42:11.899331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.105 [2024-11-20 15:42:11.899378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.105 [2024-11-20 15:42:11.899393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:26.105 [2024-11-20 15:42:11.899405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.105 [2024-11-20 15:42:11.899416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.105 [2024-11-20 15:42:11.899455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.105 [2024-11-20 15:42:11.899468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:26.105 [2024-11-20 15:42:11.899480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.105 [2024-11-20 15:42:11.899492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.105 [2024-11-20 15:42:11.899544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.105 [2024-11-20 15:42:11.899558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:26.105 [2024-11-20 15:42:11.899570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.105 [2024-11-20 15:42:11.899582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.105 [2024-11-20 15:42:11.899739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 725.390 ms, result 0 00:30:27.481 00:30:27.481 00:30:27.481 15:42:13 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:30:27.739 [2024-11-20 15:42:13.500476] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:30:27.739 [2024-11-20 15:42:13.500653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80525 ] 00:30:27.739 [2024-11-20 15:42:13.674941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.997 [2024-11-20 15:42:13.798408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.258 [2024-11-20 15:42:14.182767] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:28.258 [2024-11-20 15:42:14.183094] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:28.518 [2024-11-20 15:42:14.346743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.519 [2024-11-20 15:42:14.346795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:28.519 [2024-11-20 15:42:14.346819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:28.519 [2024-11-20 15:42:14.346832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.346904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.519 [2024-11-20 15:42:14.346919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:28.519 [2024-11-20 15:42:14.346935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:28.519 [2024-11-20 15:42:14.346947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.346973] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:28.519 [2024-11-20 15:42:14.348106] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:28.519 [2024-11-20 15:42:14.348139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.519 [2024-11-20 15:42:14.348151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:28.519 [2024-11-20 15:42:14.348165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:30:28.519 [2024-11-20 15:42:14.348176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.349727] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:28.519 [2024-11-20 15:42:14.374378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.519 [2024-11-20 15:42:14.374452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:28.519 [2024-11-20 15:42:14.374472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.648 ms 00:30:28.519 [2024-11-20 15:42:14.374483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.374657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.519 [2024-11-20 15:42:14.374673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:28.519 [2024-11-20 15:42:14.374686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:28.519 [2024-11-20 15:42:14.374697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.382469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:28.519 [2024-11-20 15:42:14.382520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:28.519 [2024-11-20 15:42:14.382535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.652 ms 00:30:28.519 [2024-11-20 15:42:14.382568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.382691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.519 [2024-11-20 15:42:14.382710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:28.519 [2024-11-20 15:42:14.382723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:30:28.519 [2024-11-20 15:42:14.382734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.382791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.519 [2024-11-20 15:42:14.382805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:28.519 [2024-11-20 15:42:14.382818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:28.519 [2024-11-20 15:42:14.382829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.382864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:28.519 [2024-11-20 15:42:14.388106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.519 [2024-11-20 15:42:14.388151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:28.519 [2024-11-20 15:42:14.388165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.254 ms 00:30:28.519 [2024-11-20 15:42:14.388196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.388238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.519 [2024-11-20 15:42:14.388250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:28.519 [2024-11-20 15:42:14.388262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:28.519 [2024-11-20 15:42:14.388273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.388346] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:28.519 [2024-11-20 15:42:14.388373] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:28.519 [2024-11-20 15:42:14.388413] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:28.519 [2024-11-20 15:42:14.388436] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:28.519 [2024-11-20 15:42:14.388537] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:28.519 [2024-11-20 15:42:14.388552] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:28.519 [2024-11-20 15:42:14.388566] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:28.519 [2024-11-20 15:42:14.388581] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:28.519 [2024-11-20 15:42:14.388874] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:28.519 [2024-11-20 15:42:14.388931] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:28.519 [2024-11-20 15:42:14.388965] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:28.519 [2024-11-20 15:42:14.388998] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:28.519 [2024-11-20 15:42:14.389039] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:28.519 [2024-11-20 15:42:14.389136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.519 [2024-11-20 15:42:14.389176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:28.519 [2024-11-20 15:42:14.389211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:30:28.519 [2024-11-20 15:42:14.389244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.389367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.519 [2024-11-20 15:42:14.389383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:28.519 [2024-11-20 15:42:14.389395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:28.519 [2024-11-20 15:42:14.389406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.519 [2024-11-20 15:42:14.389518] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:28.519 [2024-11-20 15:42:14.389534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:28.519 [2024-11-20 15:42:14.389546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:28.519 [2024-11-20 15:42:14.389558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.519 [2024-11-20 15:42:14.389588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:28.519 [2024-11-20 15:42:14.389600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:28.519 [2024-11-20 15:42:14.389611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:28.519 [2024-11-20 15:42:14.389621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:28.519 [2024-11-20 15:42:14.389632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:28.519 [2024-11-20 15:42:14.389642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.519 [2024-11-20 15:42:14.389652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:28.519 [2024-11-20 15:42:14.389663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:28.519 [2024-11-20 15:42:14.389673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.519 [2024-11-20 15:42:14.389684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:28.519 [2024-11-20 15:42:14.389694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:28.519 [2024-11-20 15:42:14.389714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.519 [2024-11-20 15:42:14.389725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:28.519 [2024-11-20 15:42:14.389735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:28.519 [2024-11-20 15:42:14.389745] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.519 [2024-11-20 15:42:14.389756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:28.519 [2024-11-20 15:42:14.389767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:28.519 [2024-11-20 15:42:14.389778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.519 [2024-11-20 15:42:14.389788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:28.519 [2024-11-20 15:42:14.389799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:28.519 [2024-11-20 15:42:14.389810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.519 [2024-11-20 15:42:14.389820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:28.519 [2024-11-20 15:42:14.389830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:28.519 [2024-11-20 15:42:14.389840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.519 [2024-11-20 15:42:14.389850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:28.519 [2024-11-20 15:42:14.389860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:28.520 [2024-11-20 15:42:14.389870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.520 [2024-11-20 15:42:14.389880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:28.520 [2024-11-20 15:42:14.389890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:28.520 [2024-11-20 15:42:14.389901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.520 [2024-11-20 15:42:14.389912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:28.520 [2024-11-20 15:42:14.389922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:28.520 [2024-11-20 15:42:14.389932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.520 [2024-11-20 15:42:14.389943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:28.520 [2024-11-20 15:42:14.389953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:28.520 [2024-11-20 15:42:14.389963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.520 [2024-11-20 15:42:14.389972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:28.520 [2024-11-20 15:42:14.389983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:28.520 [2024-11-20 15:42:14.389993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.520 [2024-11-20 15:42:14.390003] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:28.520 [2024-11-20 15:42:14.390014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:28.520 [2024-11-20 15:42:14.390025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:28.520 [2024-11-20 15:42:14.390035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.520 [2024-11-20 15:42:14.390046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:28.520 [2024-11-20 15:42:14.390057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:28.520 [2024-11-20 15:42:14.390067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:28.520 
[2024-11-20 15:42:14.390077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:28.520 [2024-11-20 15:42:14.390087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:28.520 [2024-11-20 15:42:14.390099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:28.520 [2024-11-20 15:42:14.390111] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:28.520 [2024-11-20 15:42:14.390126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.520 [2024-11-20 15:42:14.390139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:28.520 [2024-11-20 15:42:14.390151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:28.520 [2024-11-20 15:42:14.390162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:28.520 [2024-11-20 15:42:14.390174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:28.520 [2024-11-20 15:42:14.390186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:28.520 [2024-11-20 15:42:14.390197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:28.520 [2024-11-20 15:42:14.390208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:28.520 [2024-11-20 15:42:14.390219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:28.520 [2024-11-20 15:42:14.390231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:28.520 [2024-11-20 15:42:14.390243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:28.520 [2024-11-20 15:42:14.390255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:28.520 [2024-11-20 15:42:14.390266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:28.520 [2024-11-20 15:42:14.390277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:28.520 [2024-11-20 15:42:14.390288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:28.520 [2024-11-20 15:42:14.390300] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:28.520 [2024-11-20 15:42:14.390316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.520 [2024-11-20 15:42:14.390328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:28.520 [2024-11-20 15:42:14.390339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:28.520 [2024-11-20 15:42:14.390350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:28.520 [2024-11-20 15:42:14.390362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:28.520 [2024-11-20 15:42:14.390374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.520 [2024-11-20 15:42:14.390386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:28.520 [2024-11-20 15:42:14.390397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:30:28.520 [2024-11-20 15:42:14.390408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.520 [2024-11-20 15:42:14.431792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.520 [2024-11-20 15:42:14.432116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:28.520 [2024-11-20 15:42:14.432148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.327 ms 00:30:28.520 [2024-11-20 15:42:14.432160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.520 [2024-11-20 15:42:14.432281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.520 [2024-11-20 15:42:14.432295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:28.520 [2024-11-20 15:42:14.432307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:28.520 [2024-11-20 15:42:14.432318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.778 [2024-11-20 15:42:14.490563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.778 [2024-11-20 15:42:14.490881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:28.778 [2024-11-20 15:42:14.490912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.156 ms 00:30:28.778 [2024-11-20 15:42:14.490923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.778 [2024-11-20 15:42:14.490995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.778 [2024-11-20 15:42:14.491008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:28.778 [2024-11-20 15:42:14.491028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:28.778 [2024-11-20 15:42:14.491039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.778 [2024-11-20 15:42:14.491600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.778 [2024-11-20 15:42:14.491619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:28.778 [2024-11-20 15:42:14.491631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:30:28.778 [2024-11-20 15:42:14.491642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.778 [2024-11-20 15:42:14.491790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.778 [2024-11-20 15:42:14.491806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:28.778 [2024-11-20 15:42:14.491818] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:30:28.778 [2024-11-20 15:42:14.491837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.778 [2024-11-20 15:42:14.512487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.512549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:28.779 [2024-11-20 15:42:14.512604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.623 ms 00:30:28.779 [2024-11-20 15:42:14.512617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.779 [2024-11-20 15:42:14.534806] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:28.779 [2024-11-20 15:42:14.534881] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:28.779 [2024-11-20 15:42:14.534902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.534932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:28.779 [2024-11-20 15:42:14.534947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.124 ms 00:30:28.779 [2024-11-20 15:42:14.534958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.779 [2024-11-20 15:42:14.568012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.568124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:28.779 [2024-11-20 15:42:14.568144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.947 ms 00:30:28.779 [2024-11-20 15:42:14.568172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.779 [2024-11-20 15:42:14.588626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.588717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:28.779 [2024-11-20 15:42:14.588734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.353 ms 00:30:28.779 [2024-11-20 15:42:14.588761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.779 [2024-11-20 15:42:14.609976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.610276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:28.779 [2024-11-20 15:42:14.610304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.118 ms 00:30:28.779 [2024-11-20 15:42:14.610317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.779 [2024-11-20 15:42:14.611291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.611320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:28.779 [2024-11-20 15:42:14.611333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:30:28.779 [2024-11-20 15:42:14.611348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.779 [2024-11-20 15:42:14.704941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.705026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:28.779 [2024-11-20 15:42:14.705058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 93.554 ms 00:30:28.779 [2024-11-20 15:42:14.705070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.779 [2024-11-20 15:42:14.719661] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:28.779 [2024-11-20 15:42:14.723390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.723444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:28.779 [2024-11-20 15:42:14.723461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.238 ms 00:30:28.779 [2024-11-20 15:42:14.723489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.779 [2024-11-20 15:42:14.723654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.723670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:28.779 [2024-11-20 15:42:14.723683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:28.779 [2024-11-20 15:42:14.723699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.779 [2024-11-20 15:42:14.725368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.725413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:28.779 [2024-11-20 15:42:14.725427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:30:28.779 [2024-11-20 15:42:14.725438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.779 [2024-11-20 15:42:14.725485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.725497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:28.779 [2024-11-20 15:42:14.725509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:28.779 [2024-11-20 15:42:14.725520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.779 [2024-11-20 15:42:14.725565] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:28.779 [2024-11-20 15:42:14.725598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.779 [2024-11-20 15:42:14.725610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:28.779 [2024-11-20 15:42:14.725622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:28.779 [2024-11-20 15:42:14.725633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.037 [2024-11-20 15:42:14.766490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.037 [2024-11-20 15:42:14.766834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:29.037 [2024-11-20 15:42:14.766865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.831 ms 00:30:29.037 [2024-11-20 15:42:14.766890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.037 [2024-11-20 15:42:14.767022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.037 [2024-11-20 15:42:14.767039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:29.037 [2024-11-20 15:42:14.767052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:29.037 [2024-11-20 15:42:14.767063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
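The trace_step entries above follow a fixed pattern (an Action marker, then name, duration, and status lines), so the startup sequence can be summarized mechanically. As a minimal sketch, assuming one trace_step entry per console line and a hypothetical log file name, neither taken from the harness, an awk pass can rank the steps by duration:

  # rank FTL management steps by duration; log format assumed from the lines above
  awk '/trace_step.*name:/     { sub(/.*name: /, "");     step = $0 }
       /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                 printf "%10.3f ms  %s\n", $0, step }' console.log |
    sort -rn | head

On this job the biggest contributors stand out immediately: Restore P2L checkpoints (93.554 ms), Initialize NV cache (58.156 ms) and Initialize metadata (41.327 ms).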
00:30:29.037 [2024-11-20 15:42:14.768353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 421.087 ms, result 0 00:30:30.412  [2024-11-20T15:42:17.304Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-20T15:42:18.238Z] Copying: 63/1024 [MB] (33 MBps) [2024-11-20T15:42:19.172Z] Copying: 96/1024 [MB] (33 MBps) [2024-11-20T15:42:20.104Z] Copying: 129/1024 [MB] (33 MBps) [2024-11-20T15:42:21.038Z] Copying: 163/1024 [MB] (33 MBps) [2024-11-20T15:42:22.413Z] Copying: 196/1024 [MB] (33 MBps) [2024-11-20T15:42:23.349Z] Copying: 229/1024 [MB] (33 MBps) [2024-11-20T15:42:24.286Z] Copying: 262/1024 [MB] (33 MBps) [2024-11-20T15:42:25.220Z] Copying: 295/1024 [MB] (32 MBps) [2024-11-20T15:42:26.156Z] Copying: 328/1024 [MB] (32 MBps) [2024-11-20T15:42:27.091Z] Copying: 359/1024 [MB] (31 MBps) [2024-11-20T15:42:28.028Z] Copying: 393/1024 [MB] (33 MBps) [2024-11-20T15:42:29.401Z] Copying: 427/1024 [MB] (33 MBps) [2024-11-20T15:42:30.336Z] Copying: 461/1024 [MB] (33 MBps) [2024-11-20T15:42:31.272Z] Copying: 495/1024 [MB] (33 MBps) [2024-11-20T15:42:32.208Z] Copying: 528/1024 [MB] (33 MBps) [2024-11-20T15:42:33.158Z] Copying: 562/1024 [MB] (33 MBps) [2024-11-20T15:42:34.093Z] Copying: 596/1024 [MB] (33 MBps) [2024-11-20T15:42:35.029Z] Copying: 630/1024 [MB] (34 MBps) [2024-11-20T15:42:36.400Z] Copying: 664/1024 [MB] (33 MBps) [2024-11-20T15:42:37.335Z] Copying: 699/1024 [MB] (35 MBps) [2024-11-20T15:42:38.271Z] Copying: 733/1024 [MB] (33 MBps) [2024-11-20T15:42:39.219Z] Copying: 767/1024 [MB] (34 MBps) [2024-11-20T15:42:40.154Z] Copying: 800/1024 [MB] (33 MBps) [2024-11-20T15:42:41.089Z] Copying: 833/1024 [MB] (32 MBps) [2024-11-20T15:42:42.025Z] Copying: 868/1024 [MB] (34 MBps) [2024-11-20T15:42:43.403Z] Copying: 902/1024 [MB] (34 MBps) [2024-11-20T15:42:44.339Z] Copying: 935/1024 [MB] (32 MBps) [2024-11-20T15:42:45.276Z] Copying: 968/1024 [MB] (33 MBps) [2024-11-20T15:42:45.842Z] Copying: 1004/1024 [MB] (35 MBps) [2024-11-20T15:42:46.100Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-11-20 15:42:46.082424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.142 [2024-11-20 15:42:46.082507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:00.142 [2024-11-20 15:42:46.082538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:00.142 [2024-11-20 15:42:46.082606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.142 [2024-11-20 15:42:46.082645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:00.142 [2024-11-20 15:42:46.090104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.142 [2024-11-20 15:42:46.090157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:00.142 [2024-11-20 15:42:46.090177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.428 ms 00:31:00.142 [2024-11-20 15:42:46.090194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.142 [2024-11-20 15:42:46.090507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.142 [2024-11-20 15:42:46.090528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:00.142 [2024-11-20 15:42:46.090545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:31:00.142 [2024-11-20 15:42:46.090561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
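Two sanity checks are worth noting here. First, the per-step durations traced above account for most of the 421.087 ms reported by the 'FTL startup' finish message; the remainder belongs to steps logged before this excerpt. Second, the copy throughput is internally consistent: 1024 MB at the reported average of 33 MBps works out to 1024 / 33 ≈ 31 s, which matches the span of the progress timestamps (roughly 15:42:15 to 15:42:46).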
00:31:00.142 [2024-11-20 15:42:46.097785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.142 [2024-11-20 15:42:46.097840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:00.142 [2024-11-20 15:42:46.097861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.166 ms 00:31:00.143 [2024-11-20 15:42:46.097878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.402 [2024-11-20 15:42:46.106233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.402 [2024-11-20 15:42:46.106279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:00.402 [2024-11-20 15:42:46.106298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.304 ms 00:31:00.402 [2024-11-20 15:42:46.106314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.402 [2024-11-20 15:42:46.161513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.402 [2024-11-20 15:42:46.161583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:00.402 [2024-11-20 15:42:46.161603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.122 ms 00:31:00.402 [2024-11-20 15:42:46.161616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.402 [2024-11-20 15:42:46.184033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.402 [2024-11-20 15:42:46.184087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:00.402 [2024-11-20 15:42:46.184102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.362 ms 00:31:00.402 [2024-11-20 15:42:46.184128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.402 [2024-11-20 15:42:46.276953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.402 [2024-11-20 15:42:46.277188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:00.402 [2024-11-20 15:42:46.277219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.774 ms 00:31:00.402 [2024-11-20 15:42:46.277238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.402 [2024-11-20 15:42:46.319179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.402 [2024-11-20 15:42:46.319232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:00.402 [2024-11-20 15:42:46.319248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.910 ms 00:31:00.402 [2024-11-20 15:42:46.319258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.402 [2024-11-20 15:42:46.354631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.402 [2024-11-20 15:42:46.354670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:00.402 [2024-11-20 15:42:46.354696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.327 ms 00:31:00.402 [2024-11-20 15:42:46.354706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.662 [2024-11-20 15:42:46.389566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.662 [2024-11-20 15:42:46.389606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:00.662 [2024-11-20 15:42:46.389619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.821 ms 00:31:00.662 [2024-11-20 15:42:46.389645] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.662 [2024-11-20 15:42:46.424943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.662 [2024-11-20 15:42:46.425095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:00.662 [2024-11-20 15:42:46.425132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.220 ms 00:31:00.662 [2024-11-20 15:42:46.425142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.662 [2024-11-20 15:42:46.425177] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:00.662 [2024-11-20 15:42:46.425194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:31:00.662 [2024-11-20 15:42:46.425207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425418] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:00.662 [2024-11-20 15:42:46.425599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 
15:42:46.425716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:31:00.663 [2024-11-20 15:42:46.425986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.425996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:00.663 [2024-11-20 15:42:46.426306] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:00.663 [2024-11-20 15:42:46.426316] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c8d862d-b117-4ac0-b4e8-e65fd9e66655 00:31:00.663 [2024-11-20 15:42:46.426327] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:31:00.663 [2024-11-20 15:42:46.426337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 14272 00:31:00.663 [2024-11-20 15:42:46.426347] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 13312 00:31:00.663 [2024-11-20 15:42:46.426358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0721 00:31:00.663 [2024-11-20 15:42:46.426368] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:00.663 [2024-11-20 15:42:46.426382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:00.663 [2024-11-20 15:42:46.426392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:00.663 [2024-11-20 15:42:46.426411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:00.663 [2024-11-20 15:42:46.426420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:00.663 [2024-11-20 15:42:46.426430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.663 [2024-11-20 15:42:46.426446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:00.663 [2024-11-20 15:42:46.426457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:31:00.663 [2024-11-20 15:42:46.426467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.663 [2024-11-20 15:42:46.446524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.663 [2024-11-20 15:42:46.446556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:00.663 [2024-11-20 15:42:46.446605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.020 ms 00:31:00.663 [2024-11-20 15:42:46.446622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.663 [2024-11-20 15:42:46.447241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.663 [2024-11-20 15:42:46.447261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:00.663 [2024-11-20 15:42:46.447272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:31:00.663 [2024-11-20 15:42:46.447283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.663 [2024-11-20 15:42:46.497307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.663 [2024-11-20 15:42:46.497476] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:00.663 [2024-11-20 15:42:46.497513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.663 [2024-11-20 15:42:46.497536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.663 [2024-11-20 15:42:46.497607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.663 [2024-11-20 15:42:46.497620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:00.663 [2024-11-20 15:42:46.497631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.664 [2024-11-20 15:42:46.497641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.664 [2024-11-20 15:42:46.497710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.664 [2024-11-20 15:42:46.497724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:00.664 [2024-11-20 15:42:46.497739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.664 [2024-11-20 15:42:46.497750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.664 [2024-11-20 15:42:46.497767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.664 [2024-11-20 15:42:46.497778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:00.664 [2024-11-20 15:42:46.497788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.664 [2024-11-20 15:42:46.497798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.922 [2024-11-20 15:42:46.620790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.922 [2024-11-20 15:42:46.621017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:00.922 [2024-11-20 15:42:46.621065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.922 [2024-11-20 15:42:46.621077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.922 [2024-11-20 15:42:46.718689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.922 [2024-11-20 15:42:46.718741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:00.922 [2024-11-20 15:42:46.718772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.922 [2024-11-20 15:42:46.718783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.922 [2024-11-20 15:42:46.718873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.922 [2024-11-20 15:42:46.718886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:00.922 [2024-11-20 15:42:46.718896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.922 [2024-11-20 15:42:46.718913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.922 [2024-11-20 15:42:46.718950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.922 [2024-11-20 15:42:46.718970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:00.922 [2024-11-20 15:42:46.718980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.922 [2024-11-20 15:42:46.718990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.922 [2024-11-20 15:42:46.719111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
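The zero-duration Rollback entries here and below are the teardown counterparts of each startup step; after the clean 'Set FTL clean state' above they appear to have nothing left to undo. The statistics dump is also enough to re-derive the headline figure: WAF = total writes / user writes = 14272 / 13312 ≈ 1.0721, i.e. roughly 7% of media writes were FTL housekeeping rather than user data. The band dump agrees: only Band 1 holds data (131072 of 261120 blocks valid), matching the 131072 total valid LBAs.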
00:31:00.922 [2024-11-20 15:42:46.719125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:00.922 [2024-11-20 15:42:46.719135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.922 [2024-11-20 15:42:46.719146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.922 [2024-11-20 15:42:46.719185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.923 [2024-11-20 15:42:46.719198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:00.923 [2024-11-20 15:42:46.719209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.923 [2024-11-20 15:42:46.719219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.923 [2024-11-20 15:42:46.719256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.923 [2024-11-20 15:42:46.719267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:00.923 [2024-11-20 15:42:46.719277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.923 [2024-11-20 15:42:46.719287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.923 [2024-11-20 15:42:46.719337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.923 [2024-11-20 15:42:46.719349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:00.923 [2024-11-20 15:42:46.719360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.923 [2024-11-20 15:42:46.719369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.923 [2024-11-20 15:42:46.719487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 637.060 ms, result 0 00:31:01.859 00:31:01.859 00:31:01.859 15:42:47 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:03.762 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:03.762 15:42:49 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:03.762 15:42:49 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:31:03.762 15:42:49 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:04.021 15:42:49 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:04.021 15:42:49 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:04.021 Process with pid 79192 is not found 00:31:04.021 Remove shared memory files 00:31:04.021 15:42:49 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79192 00:31:04.021 15:42:49 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79192 ']' 00:31:04.021 15:42:49 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79192 00:31:04.021 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79192) - No such process 00:31:04.021 15:42:49 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79192 is not found' 00:31:04.021 15:42:49 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:31:04.021 15:42:49 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:04.021 15:42:49 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:31:04.021 15:42:49 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:31:04.021 
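The kill -0 probe traced above is the standard way to test whether a pid is alive without delivering a signal. A condensed sketch of a killprocess-style helper, assuming nothing beyond POSIX kill semantics (an approximation, not the harness source):

  # kill a process if it exists; signal 0 performs an existence check only
  killprocess() {
      local pid=$1
      [[ -z $pid ]] && return 1
      if kill -0 "$pid" 2>/dev/null; then
          kill "$pid" && wait "$pid" 2>/dev/null   # wait only succeeds for children of this shell
      else
          echo "Process with pid $pid is not found"
      fi
  }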
15:42:49 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:31:04.021 15:42:49 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:04.021 15:42:49 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:31:04.021 ************************************ 00:31:04.021 END TEST ftl_restore 00:31:04.021 ************************************ 00:31:04.021 00:31:04.021 real 2m46.701s 00:31:04.021 user 2m33.084s 00:31:04.021 sys 0m15.467s 00:31:04.021 15:42:49 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:04.021 15:42:49 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:31:04.021 15:42:49 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:04.021 15:42:49 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:04.021 15:42:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:04.021 15:42:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:04.021 ************************************ 00:31:04.021 START TEST ftl_dirty_shutdown 00:31:04.021 ************************************ 00:31:04.021 15:42:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:04.021 * Looking for test storage... 00:31:04.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:04.280 15:42:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:04.280 15:42:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:31:04.280 15:42:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:04.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.280 --rc genhtml_branch_coverage=1 00:31:04.280 --rc genhtml_function_coverage=1 00:31:04.280 --rc genhtml_legend=1 00:31:04.280 --rc geninfo_all_blocks=1 00:31:04.280 --rc geninfo_unexecuted_blocks=1 00:31:04.280 00:31:04.280 ' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:04.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.280 --rc genhtml_branch_coverage=1 00:31:04.280 --rc genhtml_function_coverage=1 00:31:04.280 --rc genhtml_legend=1 00:31:04.280 --rc geninfo_all_blocks=1 00:31:04.280 --rc geninfo_unexecuted_blocks=1 00:31:04.280 00:31:04.280 ' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:04.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.280 --rc genhtml_branch_coverage=1 00:31:04.280 --rc genhtml_function_coverage=1 00:31:04.280 --rc genhtml_legend=1 00:31:04.280 --rc geninfo_all_blocks=1 00:31:04.280 --rc geninfo_unexecuted_blocks=1 00:31:04.280 00:31:04.280 ' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:04.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.280 --rc genhtml_branch_coverage=1 00:31:04.280 --rc genhtml_function_coverage=1 00:31:04.280 --rc genhtml_legend=1 00:31:04.280 --rc geninfo_all_blocks=1 00:31:04.280 --rc geninfo_unexecuted_blocks=1 00:31:04.280 00:31:04.280 ' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:31:04.280 15:42:50 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80948 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80948 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80948 ']' 00:31:04.280 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.281 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:04.281 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.281 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:04.281 15:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:04.539 [2024-11-20 15:42:50.301202] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
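The waitforlisten step above blocks until the freshly launched spdk_tgt answers on its RPC socket. A minimal way to reproduce the pattern, with the binary and socket paths assumed rather than taken from the harness:

  # start the target, then poll the RPC socket until it responds or the process dies
  ./build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited during startup"; exit 1; }
      sleep 0.5
  done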
00:31:04.539 [2024-11-20 15:42:50.301388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80948 ] 00:31:04.797 [2024-11-20 15:42:50.503788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.797 [2024-11-20 15:42:50.675691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.731 15:42:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.731 15:42:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:05.731 15:42:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:05.731 15:42:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:31:05.731 15:42:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:05.731 15:42:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:31:05.731 15:42:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:05.731 15:42:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:05.988 15:42:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:05.988 15:42:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:05.988 15:42:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:05.988 15:42:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:05.988 15:42:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:05.988 15:42:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:05.988 15:42:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:05.988 15:42:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:06.246 15:42:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:06.246 { 00:31:06.246 "name": "nvme0n1", 00:31:06.246 "aliases": [ 00:31:06.246 "b93e700d-2ce2-4e46-b287-cd0deee1e6c0" 00:31:06.246 ], 00:31:06.246 "product_name": "NVMe disk", 00:31:06.246 "block_size": 4096, 00:31:06.246 "num_blocks": 1310720, 00:31:06.246 "uuid": "b93e700d-2ce2-4e46-b287-cd0deee1e6c0", 00:31:06.246 "numa_id": -1, 00:31:06.246 "assigned_rate_limits": { 00:31:06.246 "rw_ios_per_sec": 0, 00:31:06.246 "rw_mbytes_per_sec": 0, 00:31:06.246 "r_mbytes_per_sec": 0, 00:31:06.246 "w_mbytes_per_sec": 0 00:31:06.246 }, 00:31:06.246 "claimed": true, 00:31:06.246 "claim_type": "read_many_write_one", 00:31:06.246 "zoned": false, 00:31:06.246 "supported_io_types": { 00:31:06.246 "read": true, 00:31:06.246 "write": true, 00:31:06.246 "unmap": true, 00:31:06.246 "flush": true, 00:31:06.246 "reset": true, 00:31:06.246 "nvme_admin": true, 00:31:06.246 "nvme_io": true, 00:31:06.246 "nvme_io_md": false, 00:31:06.246 "write_zeroes": true, 00:31:06.246 "zcopy": false, 00:31:06.246 "get_zone_info": false, 00:31:06.246 "zone_management": false, 00:31:06.246 "zone_append": false, 00:31:06.246 "compare": true, 00:31:06.247 "compare_and_write": false, 00:31:06.247 "abort": true, 00:31:06.247 "seek_hole": false, 00:31:06.247 "seek_data": false, 00:31:06.247 
"copy": true, 00:31:06.247 "nvme_iov_md": false 00:31:06.247 }, 00:31:06.247 "driver_specific": { 00:31:06.247 "nvme": [ 00:31:06.247 { 00:31:06.247 "pci_address": "0000:00:11.0", 00:31:06.247 "trid": { 00:31:06.247 "trtype": "PCIe", 00:31:06.247 "traddr": "0000:00:11.0" 00:31:06.247 }, 00:31:06.247 "ctrlr_data": { 00:31:06.247 "cntlid": 0, 00:31:06.247 "vendor_id": "0x1b36", 00:31:06.247 "model_number": "QEMU NVMe Ctrl", 00:31:06.247 "serial_number": "12341", 00:31:06.247 "firmware_revision": "8.0.0", 00:31:06.247 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:06.247 "oacs": { 00:31:06.247 "security": 0, 00:31:06.247 "format": 1, 00:31:06.247 "firmware": 0, 00:31:06.247 "ns_manage": 1 00:31:06.247 }, 00:31:06.247 "multi_ctrlr": false, 00:31:06.247 "ana_reporting": false 00:31:06.247 }, 00:31:06.247 "vs": { 00:31:06.247 "nvme_version": "1.4" 00:31:06.247 }, 00:31:06.247 "ns_data": { 00:31:06.247 "id": 1, 00:31:06.247 "can_share": false 00:31:06.247 } 00:31:06.247 } 00:31:06.247 ], 00:31:06.247 "mp_policy": "active_passive" 00:31:06.247 } 00:31:06.247 } 00:31:06.247 ]' 00:31:06.247 15:42:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:06.504 15:42:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:06.504 15:42:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:06.504 15:42:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:06.504 15:42:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:06.504 15:42:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:06.504 15:42:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:06.504 15:42:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:06.504 15:42:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:06.504 15:42:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:06.504 15:42:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:06.763 15:42:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=0262fcc6-29c8-4754-8911-55baa9a631f4 00:31:06.763 15:42:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:06.763 15:42:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0262fcc6-29c8-4754-8911-55baa9a631f4 00:31:07.020 15:42:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:07.278 15:42:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=66311070-ed07-46f8-a299-dcf49968d370 00:31:07.278 15:42:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 66311070-ed07-46f8-a299-dcf49968d370 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:07.537 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:07.796 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:07.796 { 00:31:07.796 "name": "f6a3cac3-fffa-4c07-984a-4c5967cbd65c", 00:31:07.796 "aliases": [ 00:31:07.796 "lvs/nvme0n1p0" 00:31:07.796 ], 00:31:07.796 "product_name": "Logical Volume", 00:31:07.796 "block_size": 4096, 00:31:07.796 "num_blocks": 26476544, 00:31:07.796 "uuid": "f6a3cac3-fffa-4c07-984a-4c5967cbd65c", 00:31:07.796 "assigned_rate_limits": { 00:31:07.796 "rw_ios_per_sec": 0, 00:31:07.796 "rw_mbytes_per_sec": 0, 00:31:07.796 "r_mbytes_per_sec": 0, 00:31:07.796 "w_mbytes_per_sec": 0 00:31:07.796 }, 00:31:07.796 "claimed": false, 00:31:07.796 "zoned": false, 00:31:07.796 "supported_io_types": { 00:31:07.796 "read": true, 00:31:07.796 "write": true, 00:31:07.796 "unmap": true, 00:31:07.796 "flush": false, 00:31:07.796 "reset": true, 00:31:07.796 "nvme_admin": false, 00:31:07.796 "nvme_io": false, 00:31:07.796 "nvme_io_md": false, 00:31:07.796 "write_zeroes": true, 00:31:07.796 "zcopy": false, 00:31:07.796 "get_zone_info": false, 00:31:07.796 "zone_management": false, 00:31:07.796 "zone_append": false, 00:31:07.796 "compare": false, 00:31:07.796 "compare_and_write": false, 00:31:07.796 "abort": false, 00:31:07.796 "seek_hole": true, 00:31:07.796 "seek_data": true, 00:31:07.796 "copy": false, 00:31:07.796 "nvme_iov_md": false 00:31:07.796 }, 00:31:07.796 "driver_specific": { 00:31:07.796 "lvol": { 00:31:07.796 "lvol_store_uuid": "66311070-ed07-46f8-a299-dcf49968d370", 00:31:07.796 "base_bdev": "nvme0n1", 00:31:07.796 "thin_provision": true, 00:31:07.796 "num_allocated_clusters": 0, 00:31:07.796 "snapshot": false, 00:31:07.796 "clone": false, 00:31:07.796 "esnap_clone": false 00:31:07.796 } 00:31:07.796 } 00:31:07.796 } 00:31:07.796 ]' 00:31:07.796 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:07.796 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:07.796 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:07.796 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:07.796 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:07.796 15:42:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:07.796 15:42:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:31:07.796 15:42:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:07.796 15:42:53 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:08.364 15:42:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:08.364 15:42:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:08.364 15:42:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:08.364 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:08.364 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:08.364 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:08.364 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:08.364 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:08.364 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:08.364 { 00:31:08.364 "name": "f6a3cac3-fffa-4c07-984a-4c5967cbd65c", 00:31:08.364 "aliases": [ 00:31:08.364 "lvs/nvme0n1p0" 00:31:08.364 ], 00:31:08.364 "product_name": "Logical Volume", 00:31:08.364 "block_size": 4096, 00:31:08.364 "num_blocks": 26476544, 00:31:08.364 "uuid": "f6a3cac3-fffa-4c07-984a-4c5967cbd65c", 00:31:08.364 "assigned_rate_limits": { 00:31:08.364 "rw_ios_per_sec": 0, 00:31:08.364 "rw_mbytes_per_sec": 0, 00:31:08.364 "r_mbytes_per_sec": 0, 00:31:08.364 "w_mbytes_per_sec": 0 00:31:08.364 }, 00:31:08.364 "claimed": false, 00:31:08.364 "zoned": false, 00:31:08.364 "supported_io_types": { 00:31:08.364 "read": true, 00:31:08.365 "write": true, 00:31:08.365 "unmap": true, 00:31:08.365 "flush": false, 00:31:08.365 "reset": true, 00:31:08.365 "nvme_admin": false, 00:31:08.365 "nvme_io": false, 00:31:08.365 "nvme_io_md": false, 00:31:08.365 "write_zeroes": true, 00:31:08.365 "zcopy": false, 00:31:08.365 "get_zone_info": false, 00:31:08.365 "zone_management": false, 00:31:08.365 "zone_append": false, 00:31:08.365 "compare": false, 00:31:08.365 "compare_and_write": false, 00:31:08.365 "abort": false, 00:31:08.365 "seek_hole": true, 00:31:08.365 "seek_data": true, 00:31:08.365 "copy": false, 00:31:08.365 "nvme_iov_md": false 00:31:08.365 }, 00:31:08.365 "driver_specific": { 00:31:08.365 "lvol": { 00:31:08.365 "lvol_store_uuid": "66311070-ed07-46f8-a299-dcf49968d370", 00:31:08.365 "base_bdev": "nvme0n1", 00:31:08.365 "thin_provision": true, 00:31:08.365 "num_allocated_clusters": 0, 00:31:08.365 "snapshot": false, 00:31:08.365 "clone": false, 00:31:08.365 "esnap_clone": false 00:31:08.365 } 00:31:08.365 } 00:31:08.365 } 00:31:08.365 ]' 00:31:08.365 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:08.623 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f6a3cac3-fffa-4c07-984a-4c5967cbd65c 00:31:08.879 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:08.879 { 00:31:08.879 "name": "f6a3cac3-fffa-4c07-984a-4c5967cbd65c", 00:31:08.879 "aliases": [ 00:31:08.879 "lvs/nvme0n1p0" 00:31:08.879 ], 00:31:08.879 "product_name": "Logical Volume", 00:31:08.879 "block_size": 4096, 00:31:08.879 "num_blocks": 26476544, 00:31:08.879 "uuid": "f6a3cac3-fffa-4c07-984a-4c5967cbd65c", 00:31:08.879 "assigned_rate_limits": { 00:31:08.879 "rw_ios_per_sec": 0, 00:31:08.880 "rw_mbytes_per_sec": 0, 00:31:08.880 "r_mbytes_per_sec": 0, 00:31:08.880 "w_mbytes_per_sec": 0 00:31:08.880 }, 00:31:08.880 "claimed": false, 00:31:08.880 "zoned": false, 00:31:08.880 "supported_io_types": { 00:31:08.880 "read": true, 00:31:08.880 "write": true, 00:31:08.880 "unmap": true, 00:31:08.880 "flush": false, 00:31:08.880 "reset": true, 00:31:08.880 "nvme_admin": false, 00:31:08.880 "nvme_io": false, 00:31:08.880 "nvme_io_md": false, 00:31:08.880 "write_zeroes": true, 00:31:08.880 "zcopy": false, 00:31:08.880 "get_zone_info": false, 00:31:08.880 "zone_management": false, 00:31:08.880 "zone_append": false, 00:31:08.880 "compare": false, 00:31:08.880 "compare_and_write": false, 00:31:08.880 "abort": false, 00:31:08.880 "seek_hole": true, 00:31:08.880 "seek_data": true, 00:31:08.880 "copy": false, 00:31:08.880 "nvme_iov_md": false 00:31:08.880 }, 00:31:08.880 "driver_specific": { 00:31:08.880 "lvol": { 00:31:08.880 "lvol_store_uuid": "66311070-ed07-46f8-a299-dcf49968d370", 00:31:08.880 "base_bdev": "nvme0n1", 00:31:08.880 "thin_provision": true, 00:31:08.880 "num_allocated_clusters": 0, 00:31:08.880 "snapshot": false, 00:31:08.880 "clone": false, 00:31:08.880 "esnap_clone": false 00:31:08.880 } 00:31:08.880 } 00:31:08.880 } 00:31:08.880 ]' 00:31:08.880 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:08.880 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:08.880 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:09.137 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:09.137 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:09.137 15:42:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:09.137 15:42:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:31:09.137 15:42:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f6a3cac3-fffa-4c07-984a-4c5967cbd65c 
--l2p_dram_limit 10' 00:31:09.137 15:42:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:31:09.137 15:42:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:31:09.137 15:42:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:09.137 15:42:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f6a3cac3-fffa-4c07-984a-4c5967cbd65c --l2p_dram_limit 10 -c nvc0n1p0 00:31:09.397 [2024-11-20 15:42:55.155228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.397 [2024-11-20 15:42:55.155298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:09.397 [2024-11-20 15:42:55.155325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:09.397 [2024-11-20 15:42:55.155338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.397 [2024-11-20 15:42:55.155422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.397 [2024-11-20 15:42:55.155439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:09.397 [2024-11-20 15:42:55.155455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:09.397 [2024-11-20 15:42:55.155467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.397 [2024-11-20 15:42:55.155495] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:09.397 [2024-11-20 15:42:55.156694] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:09.397 [2024-11-20 15:42:55.156892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.397 [2024-11-20 15:42:55.156911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:09.397 [2024-11-20 15:42:55.156939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:31:09.398 [2024-11-20 15:42:55.156951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.398 [2024-11-20 15:42:55.157127] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ae539e5c-0d15-4f4f-a98d-b97d05826ce0 00:31:09.398 [2024-11-20 15:42:55.158702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.398 [2024-11-20 15:42:55.158752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:09.398 [2024-11-20 15:42:55.158771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:09.398 [2024-11-20 15:42:55.158792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.398 [2024-11-20 15:42:55.166448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.398 [2024-11-20 15:42:55.166496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:09.398 [2024-11-20 15:42:55.166510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.560 ms 00:31:09.398 [2024-11-20 15:42:55.166523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.398 [2024-11-20 15:42:55.166654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.398 [2024-11-20 15:42:55.166685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:09.398 [2024-11-20 15:42:55.166697] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:31:09.398 [2024-11-20 15:42:55.166714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.398 [2024-11-20 15:42:55.166798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.398 [2024-11-20 15:42:55.166816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:09.398 [2024-11-20 15:42:55.166827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:09.398 [2024-11-20 15:42:55.166844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.398 [2024-11-20 15:42:55.166870] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:09.398 [2024-11-20 15:42:55.172217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.398 [2024-11-20 15:42:55.172253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:09.398 [2024-11-20 15:42:55.172270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.351 ms 00:31:09.398 [2024-11-20 15:42:55.172281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.398 [2024-11-20 15:42:55.172323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.398 [2024-11-20 15:42:55.172334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:09.398 [2024-11-20 15:42:55.172347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:09.398 [2024-11-20 15:42:55.172357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.398 [2024-11-20 15:42:55.172401] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:09.398 [2024-11-20 15:42:55.172552] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:09.398 [2024-11-20 15:42:55.172574] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:09.398 [2024-11-20 15:42:55.172608] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:09.398 [2024-11-20 15:42:55.172627] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:09.398 [2024-11-20 15:42:55.172641] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:09.398 [2024-11-20 15:42:55.172656] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:09.398 [2024-11-20 15:42:55.172668] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:09.398 [2024-11-20 15:42:55.172685] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:09.398 [2024-11-20 15:42:55.172696] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:09.398 [2024-11-20 15:42:55.172711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.398 [2024-11-20 15:42:55.172722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:09.398 [2024-11-20 15:42:55.172737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:31:09.398 [2024-11-20 15:42:55.172760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.398 [2024-11-20 15:42:55.172847] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.398 [2024-11-20 15:42:55.172860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:09.398 [2024-11-20 15:42:55.172874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:09.398 [2024-11-20 15:42:55.172885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.398 [2024-11-20 15:42:55.172995] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:09.398 [2024-11-20 15:42:55.173009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:09.398 [2024-11-20 15:42:55.173024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:09.398 [2024-11-20 15:42:55.173035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:09.398 [2024-11-20 15:42:55.173059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:09.398 [2024-11-20 15:42:55.173083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:09.398 [2024-11-20 15:42:55.173096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:09.398 [2024-11-20 15:42:55.173120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:09.398 [2024-11-20 15:42:55.173131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:09.398 [2024-11-20 15:42:55.173144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:09.398 [2024-11-20 15:42:55.173154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:09.398 [2024-11-20 15:42:55.173167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:09.398 [2024-11-20 15:42:55.173178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:09.398 [2024-11-20 15:42:55.173205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:09.398 [2024-11-20 15:42:55.173220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:09.398 [2024-11-20 15:42:55.173243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.398 [2024-11-20 15:42:55.173266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:09.398 [2024-11-20 15:42:55.173277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.398 [2024-11-20 15:42:55.173300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:09.398 [2024-11-20 15:42:55.173312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.398 [2024-11-20 15:42:55.173335] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:09.398 [2024-11-20 15:42:55.173346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.398 [2024-11-20 15:42:55.173369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:09.398 [2024-11-20 15:42:55.173384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:09.398 [2024-11-20 15:42:55.173407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:09.398 [2024-11-20 15:42:55.173418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:09.398 [2024-11-20 15:42:55.173430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:09.398 [2024-11-20 15:42:55.173440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:09.398 [2024-11-20 15:42:55.173453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:09.398 [2024-11-20 15:42:55.173463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:09.398 [2024-11-20 15:42:55.173486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:09.398 [2024-11-20 15:42:55.173499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173508] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:09.398 [2024-11-20 15:42:55.173522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:09.398 [2024-11-20 15:42:55.173533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:09.398 [2024-11-20 15:42:55.173549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.398 [2024-11-20 15:42:55.173561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:09.398 [2024-11-20 15:42:55.173587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:09.398 [2024-11-20 15:42:55.173598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:09.398 [2024-11-20 15:42:55.173612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:09.398 [2024-11-20 15:42:55.173631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:09.398 [2024-11-20 15:42:55.173645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:09.398 [2024-11-20 15:42:55.173660] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:09.398 [2024-11-20 15:42:55.173677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:09.398 [2024-11-20 15:42:55.173694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:09.399 [2024-11-20 15:42:55.173708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:09.399 [2024-11-20 15:42:55.173720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:09.399 [2024-11-20 15:42:55.173734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:09.399 [2024-11-20 15:42:55.173746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:09.399 [2024-11-20 15:42:55.173760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:09.399 [2024-11-20 15:42:55.173771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:09.399 [2024-11-20 15:42:55.173785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:09.399 [2024-11-20 15:42:55.173796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:09.399 [2024-11-20 15:42:55.173812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:09.399 [2024-11-20 15:42:55.173824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:09.399 [2024-11-20 15:42:55.173837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:09.399 [2024-11-20 15:42:55.173849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:09.399 [2024-11-20 15:42:55.173864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:09.399 [2024-11-20 15:42:55.173876] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:09.399 [2024-11-20 15:42:55.173891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:09.399 [2024-11-20 15:42:55.173904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:09.399 [2024-11-20 15:42:55.173918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:09.399 [2024-11-20 15:42:55.173929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:09.399 [2024-11-20 15:42:55.173959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:09.399 [2024-11-20 15:42:55.173972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.399 [2024-11-20 15:42:55.173986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:09.399 [2024-11-20 15:42:55.173998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:31:09.399 [2024-11-20 15:42:55.174012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.399 [2024-11-20 15:42:55.174063] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:09.399 [2024-11-20 15:42:55.174088] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:12.675 [2024-11-20 15:42:58.041529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.041755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:12.675 [2024-11-20 15:42:58.041785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2867.448 ms 00:31:12.675 [2024-11-20 15:42:58.041799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.081721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.081774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:12.675 [2024-11-20 15:42:58.081791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.624 ms 00:31:12.675 [2024-11-20 15:42:58.081805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.081962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.081979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:12.675 [2024-11-20 15:42:58.081991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:31:12.675 [2024-11-20 15:42:58.082010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.126259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.126324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:12.675 [2024-11-20 15:42:58.126341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.202 ms 00:31:12.675 [2024-11-20 15:42:58.126354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.126404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.126423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:12.675 [2024-11-20 15:42:58.126434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:12.675 [2024-11-20 15:42:58.126447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.126979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.127001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:12.675 [2024-11-20 15:42:58.127028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:31:12.675 [2024-11-20 15:42:58.127042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.127152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.127173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:12.675 [2024-11-20 15:42:58.127188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:31:12.675 [2024-11-20 15:42:58.127206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.147088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.147139] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:12.675 [2024-11-20 15:42:58.147154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.859 ms 00:31:12.675 [2024-11-20 15:42:58.147168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.171564] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:12.675 [2024-11-20 15:42:58.175033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.175070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:12.675 [2024-11-20 15:42:58.175091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.751 ms 00:31:12.675 [2024-11-20 15:42:58.175104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.256486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.256758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:12.675 [2024-11-20 15:42:58.256791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.329 ms 00:31:12.675 [2024-11-20 15:42:58.256804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.257010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.257028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:12.675 [2024-11-20 15:42:58.257046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:31:12.675 [2024-11-20 15:42:58.257056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.294200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.294240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:12.675 [2024-11-20 15:42:58.294259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.082 ms 00:31:12.675 [2024-11-20 15:42:58.294270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.331203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.331241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:12.675 [2024-11-20 15:42:58.331259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.882 ms 00:31:12.675 [2024-11-20 15:42:58.331270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.332029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.332057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:12.675 [2024-11-20 15:42:58.332072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:31:12.675 [2024-11-20 15:42:58.332086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.675 [2024-11-20 15:42:58.430545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.675 [2024-11-20 15:42:58.430614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:12.676 [2024-11-20 15:42:58.430639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.382 ms 00:31:12.676 [2024-11-20 15:42:58.430650] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.676 [2024-11-20 15:42:58.469572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.676 [2024-11-20 15:42:58.469628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:12.676 [2024-11-20 15:42:58.469648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.833 ms 00:31:12.676 [2024-11-20 15:42:58.469659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.676 [2024-11-20 15:42:58.506737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.676 [2024-11-20 15:42:58.506778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:12.676 [2024-11-20 15:42:58.506795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.029 ms 00:31:12.676 [2024-11-20 15:42:58.506806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.676 [2024-11-20 15:42:58.544092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.676 [2024-11-20 15:42:58.544256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:12.676 [2024-11-20 15:42:58.544283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.239 ms 00:31:12.676 [2024-11-20 15:42:58.544294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.676 [2024-11-20 15:42:58.544341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.676 [2024-11-20 15:42:58.544353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:12.676 [2024-11-20 15:42:58.544370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:12.676 [2024-11-20 15:42:58.544381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.676 [2024-11-20 15:42:58.544506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.676 [2024-11-20 15:42:58.544520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:12.676 [2024-11-20 15:42:58.544537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:12.676 [2024-11-20 15:42:58.544547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.676 [2024-11-20 15:42:58.545725] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3389.969 ms, result 0 00:31:12.676 { 00:31:12.676 "name": "ftl0", 00:31:12.676 "uuid": "ae539e5c-0d15-4f4f-a98d-b97d05826ce0" 00:31:12.676 } 00:31:12.676 15:42:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:31:12.676 15:42:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:12.978 15:42:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:31:12.978 15:42:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:31:12.978 15:42:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:31:13.245 /dev/nbd0 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:31:13.245 1+0 records in 00:31:13.245 1+0 records out 00:31:13.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487539 s, 8.4 MB/s 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:31:13.245 15:42:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:31:13.503 [2024-11-20 15:42:59.218190] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:31:13.503 [2024-11-20 15:42:59.219117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81101 ] 00:31:13.503 [2024-11-20 15:42:59.419059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.760 [2024-11-20 15:42:59.529985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.140  [2024-11-20T15:43:02.036Z] Copying: 197/1024 [MB] (197 MBps) [2024-11-20T15:43:02.972Z] Copying: 385/1024 [MB] (187 MBps) [2024-11-20T15:43:04.004Z] Copying: 570/1024 [MB] (185 MBps) [2024-11-20T15:43:04.940Z] Copying: 767/1024 [MB] (196 MBps) [2024-11-20T15:43:05.506Z] Copying: 956/1024 [MB] (188 MBps) [2024-11-20T15:43:06.440Z] Copying: 1024/1024 [MB] (average 190 MBps) 00:31:20.482 00:31:20.482 15:43:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:22.416 15:43:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:31:22.416 [2024-11-20 15:43:08.293174] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
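(For orientation, a condensed replay of the FTL bring-up traced earlier in this log. This is a hedged sketch, not part of the test scripts: the RPC commands, names and sizes are copied from the trace, the $rpc variable is the sketch's own shorthand, and the cache sizing of 5171 MiB is inferred to be bdev_size/20, since 103424 / 20 = 5171.2.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    base=f6a3cac3-fffa-4c07-984a-4c5967cbd65c   # thin-provisioned lvol used as the base bdev

    # get_bdev_size: block_size * num_blocks, reported in MiB
    bs=$($rpc bdev_get_bdevs -b "$base" | jq '.[] .block_size')   # 4096
    nb=$($rpc bdev_get_bdevs -b "$base" | jq '.[] .num_blocks')   # 26476544
    echo $(( bs * nb / 1024 / 1024 ))                             # 103424 (MiB)

    # Attach the controller at cache_bdf (0000:00:10.0) for the NV-cache side
    # and carve one 5171 MiB split to serve as the write buffer cache.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                            # -> nvc0n1p0

    # FTL on top of both, with the L2P capped at 10 MiB of DRAM; -t 240 gives the
    # RPC a long timeout because startup scrubs the 5 NV-cache chunks (the
    # ~2867 ms "Scrub NV cache" step in the trace).
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$base" --l2p_dram_limit 10 -c nvc0n1p0

The 10 MiB cap is what resurfaces during startup as "l2p maximum resident size is: 9 (of 10) MiB"; a fully resident table would need 20971520 entries * 4 B = 80 MiB, which matches the 80.00 MiB l2p region in the layout dump above.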
00:31:22.416 [2024-11-20 15:43:08.293302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81195 ] 00:31:22.674 [2024-11-20 15:43:08.465599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.674 [2024-11-20 15:43:08.582446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.048  [2024-11-20T15:43:10.940Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-20T15:43:12.317Z] Copying: 37/1024 [MB] (18 MBps) [2024-11-20T15:43:13.253Z] Copying: 55/1024 [MB] (17 MBps) [2024-11-20T15:43:14.186Z] Copying: 74/1024 [MB] (18 MBps) [2024-11-20T15:43:15.169Z] Copying: 92/1024 [MB] (18 MBps) [2024-11-20T15:43:16.108Z] Copying: 111/1024 [MB] (18 MBps) [2024-11-20T15:43:17.044Z] Copying: 130/1024 [MB] (19 MBps) [2024-11-20T15:43:17.980Z] Copying: 149/1024 [MB] (18 MBps) [2024-11-20T15:43:19.355Z] Copying: 168/1024 [MB] (18 MBps) [2024-11-20T15:43:20.288Z] Copying: 187/1024 [MB] (19 MBps) [2024-11-20T15:43:21.233Z] Copying: 206/1024 [MB] (18 MBps) [2024-11-20T15:43:22.251Z] Copying: 225/1024 [MB] (18 MBps) [2024-11-20T15:43:23.186Z] Copying: 244/1024 [MB] (19 MBps) [2024-11-20T15:43:24.121Z] Copying: 263/1024 [MB] (19 MBps) [2024-11-20T15:43:25.055Z] Copying: 282/1024 [MB] (19 MBps) [2024-11-20T15:43:25.993Z] Copying: 302/1024 [MB] (19 MBps) [2024-11-20T15:43:26.935Z] Copying: 320/1024 [MB] (18 MBps) [2024-11-20T15:43:28.316Z] Copying: 339/1024 [MB] (19 MBps) [2024-11-20T15:43:29.254Z] Copying: 358/1024 [MB] (18 MBps) [2024-11-20T15:43:30.189Z] Copying: 376/1024 [MB] (18 MBps) [2024-11-20T15:43:31.119Z] Copying: 394/1024 [MB] (18 MBps) [2024-11-20T15:43:32.052Z] Copying: 412/1024 [MB] (18 MBps) [2024-11-20T15:43:32.988Z] Copying: 430/1024 [MB] (17 MBps) [2024-11-20T15:43:34.368Z] Copying: 449/1024 [MB] (18 MBps) [2024-11-20T15:43:34.936Z] Copying: 467/1024 [MB] (18 MBps) [2024-11-20T15:43:36.315Z] Copying: 486/1024 [MB] (18 MBps) [2024-11-20T15:43:37.254Z] Copying: 505/1024 [MB] (19 MBps) [2024-11-20T15:43:38.191Z] Copying: 524/1024 [MB] (19 MBps) [2024-11-20T15:43:39.127Z] Copying: 543/1024 [MB] (18 MBps) [2024-11-20T15:43:40.063Z] Copying: 563/1024 [MB] (19 MBps) [2024-11-20T15:43:40.999Z] Copying: 582/1024 [MB] (19 MBps) [2024-11-20T15:43:41.936Z] Copying: 601/1024 [MB] (19 MBps) [2024-11-20T15:43:43.361Z] Copying: 620/1024 [MB] (18 MBps) [2024-11-20T15:43:44.293Z] Copying: 639/1024 [MB] (19 MBps) [2024-11-20T15:43:45.227Z] Copying: 658/1024 [MB] (18 MBps) [2024-11-20T15:43:46.163Z] Copying: 676/1024 [MB] (18 MBps) [2024-11-20T15:43:47.100Z] Copying: 695/1024 [MB] (18 MBps) [2024-11-20T15:43:48.035Z] Copying: 713/1024 [MB] (17 MBps) [2024-11-20T15:43:49.044Z] Copying: 730/1024 [MB] (17 MBps) [2024-11-20T15:43:49.978Z] Copying: 748/1024 [MB] (18 MBps) [2024-11-20T15:43:51.354Z] Copying: 767/1024 [MB] (18 MBps) [2024-11-20T15:43:52.288Z] Copying: 786/1024 [MB] (19 MBps) [2024-11-20T15:43:53.222Z] Copying: 805/1024 [MB] (19 MBps) [2024-11-20T15:43:54.159Z] Copying: 825/1024 [MB] (19 MBps) [2024-11-20T15:43:55.100Z] Copying: 844/1024 [MB] (19 MBps) [2024-11-20T15:43:56.038Z] Copying: 863/1024 [MB] (18 MBps) [2024-11-20T15:43:56.976Z] Copying: 883/1024 [MB] (19 MBps) [2024-11-20T15:43:58.348Z] Copying: 902/1024 [MB] (19 MBps) [2024-11-20T15:43:59.284Z] Copying: 921/1024 [MB] (19 MBps) [2024-11-20T15:44:00.218Z] Copying: 939/1024 [MB] (18 MBps) 
[2024-11-20T15:44:01.154Z] Copying: 958/1024 [MB] (18 MBps) [2024-11-20T15:44:02.089Z] Copying: 974/1024 [MB] (16 MBps) [2024-11-20T15:44:03.025Z] Copying: 991/1024 [MB] (16 MBps) [2024-11-20T15:44:03.979Z] Copying: 1009/1024 [MB] (17 MBps) [2024-11-20T15:44:05.357Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:32:19.399 00:32:19.399 15:44:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:32:19.399 15:44:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:32:19.399 15:44:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:19.658 [2024-11-20 15:44:05.521493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.658 [2024-11-20 15:44:05.521561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:19.658 [2024-11-20 15:44:05.521599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:19.658 [2024-11-20 15:44:05.521613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.658 [2024-11-20 15:44:05.521646] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:19.658 [2024-11-20 15:44:05.526076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.658 [2024-11-20 15:44:05.526113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:19.658 [2024-11-20 15:44:05.526131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.404 ms 00:32:19.658 [2024-11-20 15:44:05.526142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.658 [2024-11-20 15:44:05.528134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.658 [2024-11-20 15:44:05.528177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:19.658 [2024-11-20 15:44:05.528194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.945 ms 00:32:19.658 [2024-11-20 15:44:05.528205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.658 [2024-11-20 15:44:05.544494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.658 [2024-11-20 15:44:05.544558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:19.658 [2024-11-20 15:44:05.544592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.254 ms 00:32:19.658 [2024-11-20 15:44:05.544605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.658 [2024-11-20 15:44:05.550089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.658 [2024-11-20 15:44:05.550133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:19.658 [2024-11-20 15:44:05.550150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.432 ms 00:32:19.658 [2024-11-20 15:44:05.550160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.658 [2024-11-20 15:44:05.592379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.658 [2024-11-20 15:44:05.592445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:19.658 [2024-11-20 15:44:05.592467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.112 ms 00:32:19.658 [2024-11-20 15:44:05.592479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:32:19.918 [2024-11-20 15:44:05.617591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.918 [2024-11-20 15:44:05.617655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:19.918 [2024-11-20 15:44:05.617677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.033 ms 00:32:19.918 [2024-11-20 15:44:05.617693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.918 [2024-11-20 15:44:05.617909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.918 [2024-11-20 15:44:05.617925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:19.918 [2024-11-20 15:44:05.617941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:32:19.918 [2024-11-20 15:44:05.617952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.918 [2024-11-20 15:44:05.662443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.918 [2024-11-20 15:44:05.662515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:19.918 [2024-11-20 15:44:05.662537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.461 ms 00:32:19.918 [2024-11-20 15:44:05.662550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.918 [2024-11-20 15:44:05.707975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.918 [2024-11-20 15:44:05.708049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:19.918 [2024-11-20 15:44:05.708072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.314 ms 00:32:19.918 [2024-11-20 15:44:05.708084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.918 [2024-11-20 15:44:05.752941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.918 [2024-11-20 15:44:05.753014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:19.918 [2024-11-20 15:44:05.753036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.758 ms 00:32:19.918 [2024-11-20 15:44:05.753048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.918 [2024-11-20 15:44:05.793353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.918 [2024-11-20 15:44:05.793421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:19.918 [2024-11-20 15:44:05.793441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.129 ms 00:32:19.918 [2024-11-20 15:44:05.793451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.918 [2024-11-20 15:44:05.793527] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:19.918 [2024-11-20 15:44:05.793548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 
00:32:19.918 [2024-11-20 15:44:05.793631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:19.918 [2024-11-20 15:44:05.793935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 
wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.793947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.793961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.793972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.793985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.793996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794566] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:19.919 [2024-11-20 15:44:05.794856] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:19.919 [2024-11-20 15:44:05.794869] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae539e5c-0d15-4f4f-a98d-b97d05826ce0 00:32:19.919 [2024-11-20 15:44:05.794881] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:19.919 [2024-11-20 15:44:05.794896] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:19.919 [2024-11-20 15:44:05.794906] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:19.920 
[2024-11-20 15:44:05.794932] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:19.920 [2024-11-20 15:44:05.794942] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:19.920 [2024-11-20 15:44:05.794955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:19.920 [2024-11-20 15:44:05.794965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:19.920 [2024-11-20 15:44:05.794977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:19.920 [2024-11-20 15:44:05.794986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:19.920 [2024-11-20 15:44:05.794999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.920 [2024-11-20 15:44:05.795010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:19.920 [2024-11-20 15:44:05.795024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.474 ms 00:32:19.920 [2024-11-20 15:44:05.795034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.920 [2024-11-20 15:44:05.816524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.920 [2024-11-20 15:44:05.816604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:19.920 [2024-11-20 15:44:05.816625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.387 ms 00:32:19.920 [2024-11-20 15:44:05.816635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.920 [2024-11-20 15:44:05.817207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.920 [2024-11-20 15:44:05.817227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:19.920 [2024-11-20 15:44:05.817241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:32:19.920 [2024-11-20 15:44:05.817258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.179 [2024-11-20 15:44:05.884808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.179 [2024-11-20 15:44:05.884869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:20.179 [2024-11-20 15:44:05.884888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.179 [2024-11-20 15:44:05.884899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.179 [2024-11-20 15:44:05.884984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.179 [2024-11-20 15:44:05.884996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:20.179 [2024-11-20 15:44:05.885010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.179 [2024-11-20 15:44:05.885020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.179 [2024-11-20 15:44:05.885135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.179 [2024-11-20 15:44:05.885152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:20.179 [2024-11-20 15:44:05.885166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.179 [2024-11-20 15:44:05.885177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.179 [2024-11-20 15:44:05.885203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.179 [2024-11-20 15:44:05.885214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize valid map 00:32:20.179 [2024-11-20 15:44:05.885227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.179 [2024-11-20 15:44:05.885238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.179 [2024-11-20 15:44:06.015442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.179 [2024-11-20 15:44:06.015511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:20.179 [2024-11-20 15:44:06.015531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.179 [2024-11-20 15:44:06.015541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.179 [2024-11-20 15:44:06.121917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.179 [2024-11-20 15:44:06.121990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:20.179 [2024-11-20 15:44:06.122010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.179 [2024-11-20 15:44:06.122021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.179 [2024-11-20 15:44:06.122148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.179 [2024-11-20 15:44:06.122161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:20.179 [2024-11-20 15:44:06.122174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.179 [2024-11-20 15:44:06.122188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.179 [2024-11-20 15:44:06.122259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.180 [2024-11-20 15:44:06.122272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:20.180 [2024-11-20 15:44:06.122286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.180 [2024-11-20 15:44:06.122296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.180 [2024-11-20 15:44:06.122422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.180 [2024-11-20 15:44:06.122435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:20.180 [2024-11-20 15:44:06.122449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.180 [2024-11-20 15:44:06.122462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.180 [2024-11-20 15:44:06.122504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.180 [2024-11-20 15:44:06.122517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:20.180 [2024-11-20 15:44:06.122531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.180 [2024-11-20 15:44:06.122541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.180 [2024-11-20 15:44:06.122622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.180 [2024-11-20 15:44:06.122635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:20.180 [2024-11-20 15:44:06.122648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.180 [2024-11-20 15:44:06.122676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.180 [2024-11-20 15:44:06.122735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.180 
[2024-11-20 15:44:06.122748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:20.180 [2024-11-20 15:44:06.122762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.180 [2024-11-20 15:44:06.122774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.180 [2024-11-20 15:44:06.122932] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 601.391 ms, result 0 00:32:20.180 true 00:32:20.439 15:44:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80948 00:32:20.439 15:44:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80948 00:32:20.439 15:44:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:32:20.439 [2024-11-20 15:44:06.270622] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:32:20.439 [2024-11-20 15:44:06.270812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81772 ] 00:32:20.698 [2024-11-20 15:44:06.459689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.698 [2024-11-20 15:44:06.585298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.075  [2024-11-20T15:44:08.968Z] Copying: 183/1024 [MB] (183 MBps) [2024-11-20T15:44:10.346Z] Copying: 353/1024 [MB] (170 MBps) [2024-11-20T15:44:11.283Z] Copying: 538/1024 [MB] (185 MBps) [2024-11-20T15:44:12.221Z] Copying: 729/1024 [MB] (190 MBps) [2024-11-20T15:44:12.479Z] Copying: 918/1024 [MB] (189 MBps) [2024-11-20T15:44:13.857Z] Copying: 1024/1024 [MB] (average 185 MBps) 00:32:27.900 00:32:27.900 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80948 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:32:27.900 15:44:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:27.900 [2024-11-20 15:44:13.806785] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
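The kill -9 above delivers SIGKILL to the spdk_tgt process (pid 80948), which is the dirty-shutdown scenario this test exercises; the bash job notice "line 87: 80948 Killed" further down confirms it landed. spdk_dd first generates a 1 GiB random payload (testfile2), and the second invocation then replays that file into ftl0 at block offset 262144 (--seek), forcing the next FTL startup to recover from a dirty state. A minimal shell sketch of the size and throughput arithmetic, taking the block size and count from the spdk_dd invocation above (the awk call is only illustrative, not part of the test):

bs=4096
count=262144
# 262144 blocks * 4096 B = 1 GiB = 1024 MiB, matching "Copying: 1024/1024 [MB]"
echo "payload: $(( bs * count / 1024 / 1024 )) MiB"
# At the reported average of 185 MBps the copy takes roughly 5.5 s,
# in line with the spread of the progress timestamps above
awk -v mb=1024 -v rate=185 'BEGIN { printf "expected copy time: %.1f s\n", mb / rate }'

Note that the statistics dump before the shutdown reports "WAF: inf": write amplification factor is total writes divided by user writes, and only internal writes had occurred at that point (total writes: 960, user writes: 0). After the replay completes, the closing dump further below reports WAF: 1.0079, i.e. 123072 / 122112.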
00:32:27.900 [2024-11-20 15:44:13.807436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81854 ] 00:32:28.158 [2024-11-20 15:44:14.002051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.417 [2024-11-20 15:44:14.120253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.676 [2024-11-20 15:44:14.501309] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:28.676 [2024-11-20 15:44:14.501384] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:28.676 [2024-11-20 15:44:14.568470] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:28.676 [2024-11-20 15:44:14.568991] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:28.676 [2024-11-20 15:44:14.569203] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:28.934 [2024-11-20 15:44:14.794034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.794106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:28.935 [2024-11-20 15:44:14.794123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:28.935 [2024-11-20 15:44:14.794134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.794200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.794213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:28.935 [2024-11-20 15:44:14.794224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:28.935 [2024-11-20 15:44:14.794234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.794257] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:28.935 [2024-11-20 15:44:14.795397] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:28.935 [2024-11-20 15:44:14.795430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.795443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:28.935 [2024-11-20 15:44:14.795456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.178 ms 00:32:28.935 [2024-11-20 15:44:14.795467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.796982] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:28.935 [2024-11-20 15:44:14.817675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.817923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:28.935 [2024-11-20 15:44:14.817951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.691 ms 00:32:28.935 [2024-11-20 15:44:14.817963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.818070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.818086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:32:28.935 [2024-11-20 15:44:14.818098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:28.935 [2024-11-20 15:44:14.818109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.825614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.825655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:28.935 [2024-11-20 15:44:14.825669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.401 ms 00:32:28.935 [2024-11-20 15:44:14.825681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.825772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.825789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:28.935 [2024-11-20 15:44:14.825800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:28.935 [2024-11-20 15:44:14.825811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.825866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.825879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:28.935 [2024-11-20 15:44:14.825889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:28.935 [2024-11-20 15:44:14.825900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.825927] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:28.935 [2024-11-20 15:44:14.830891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.830931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:28.935 [2024-11-20 15:44:14.830944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.972 ms 00:32:28.935 [2024-11-20 15:44:14.830955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.830990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.831001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:28.935 [2024-11-20 15:44:14.831012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:28.935 [2024-11-20 15:44:14.831022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.831091] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:28.935 [2024-11-20 15:44:14.831126] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:28.935 [2024-11-20 15:44:14.831163] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:28.935 [2024-11-20 15:44:14.831181] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:28.935 [2024-11-20 15:44:14.831274] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:28.935 [2024-11-20 15:44:14.831288] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:28.935 
[2024-11-20 15:44:14.831301] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:28.935 [2024-11-20 15:44:14.831314] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:28.935 [2024-11-20 15:44:14.831330] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:28.935 [2024-11-20 15:44:14.831341] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:28.935 [2024-11-20 15:44:14.831351] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:28.935 [2024-11-20 15:44:14.831361] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:28.935 [2024-11-20 15:44:14.831371] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:28.935 [2024-11-20 15:44:14.831382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.831392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:28.935 [2024-11-20 15:44:14.831402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:32:28.935 [2024-11-20 15:44:14.831412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.831489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.935 [2024-11-20 15:44:14.831503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:28.935 [2024-11-20 15:44:14.831514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:28.935 [2024-11-20 15:44:14.831524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.935 [2024-11-20 15:44:14.831634] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:28.935 [2024-11-20 15:44:14.831650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:28.935 [2024-11-20 15:44:14.831662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:28.935 [2024-11-20 15:44:14.831672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:28.935 [2024-11-20 15:44:14.831683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:28.935 [2024-11-20 15:44:14.831692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:28.935 [2024-11-20 15:44:14.831701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:28.935 [2024-11-20 15:44:14.831711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:28.935 [2024-11-20 15:44:14.831720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:28.935 [2024-11-20 15:44:14.831729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:28.935 [2024-11-20 15:44:14.831739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:28.935 [2024-11-20 15:44:14.831760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:28.935 [2024-11-20 15:44:14.831769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:28.935 [2024-11-20 15:44:14.831778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:28.935 [2024-11-20 15:44:14.831788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:28.935 [2024-11-20 15:44:14.831798] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:28.935 [2024-11-20 15:44:14.831807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:28.935 [2024-11-20 15:44:14.831816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:28.935 [2024-11-20 15:44:14.831826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:28.935 [2024-11-20 15:44:14.831835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:28.935 [2024-11-20 15:44:14.831845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:28.935 [2024-11-20 15:44:14.831854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:28.935 [2024-11-20 15:44:14.831863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:28.935 [2024-11-20 15:44:14.831872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:28.935 [2024-11-20 15:44:14.831882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:28.935 [2024-11-20 15:44:14.831891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:28.935 [2024-11-20 15:44:14.831900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:28.935 [2024-11-20 15:44:14.831909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:28.935 [2024-11-20 15:44:14.831918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:28.935 [2024-11-20 15:44:14.831927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:28.935 [2024-11-20 15:44:14.831936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:28.935 [2024-11-20 15:44:14.831945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:28.935 [2024-11-20 15:44:14.831954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:28.935 [2024-11-20 15:44:14.831963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:28.935 [2024-11-20 15:44:14.831972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:28.935 [2024-11-20 15:44:14.831981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:28.935 [2024-11-20 15:44:14.831990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:28.935 [2024-11-20 15:44:14.831999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:28.935 [2024-11-20 15:44:14.832008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:28.935 [2024-11-20 15:44:14.832017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:28.936 [2024-11-20 15:44:14.832026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:28.936 [2024-11-20 15:44:14.832035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:28.936 [2024-11-20 15:44:14.832048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:28.936 [2024-11-20 15:44:14.832058] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:28.936 [2024-11-20 15:44:14.832068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:28.936 [2024-11-20 15:44:14.832078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:28.936 [2024-11-20 15:44:14.832091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:28.936 [2024-11-20 
15:44:14.832101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:28.936 [2024-11-20 15:44:14.832110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:28.936 [2024-11-20 15:44:14.832120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:28.936 [2024-11-20 15:44:14.832129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:28.936 [2024-11-20 15:44:14.832138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:28.936 [2024-11-20 15:44:14.832147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:28.936 [2024-11-20 15:44:14.832158] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:28.936 [2024-11-20 15:44:14.832170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:28.936 [2024-11-20 15:44:14.832181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:28.936 [2024-11-20 15:44:14.832192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:28.936 [2024-11-20 15:44:14.832202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:28.936 [2024-11-20 15:44:14.832213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:28.936 [2024-11-20 15:44:14.832223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:28.936 [2024-11-20 15:44:14.832233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:28.936 [2024-11-20 15:44:14.832261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:28.936 [2024-11-20 15:44:14.832272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:28.936 [2024-11-20 15:44:14.832283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:28.936 [2024-11-20 15:44:14.832294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:28.936 [2024-11-20 15:44:14.832305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:28.936 [2024-11-20 15:44:14.832316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:28.936 [2024-11-20 15:44:14.832327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:28.936 [2024-11-20 15:44:14.832338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:28.936 [2024-11-20 15:44:14.832349] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:32:28.936 [2024-11-20 15:44:14.832361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:28.936 [2024-11-20 15:44:14.832373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:28.936 [2024-11-20 15:44:14.832384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:28.936 [2024-11-20 15:44:14.832395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:28.936 [2024-11-20 15:44:14.832408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:28.936 [2024-11-20 15:44:14.832420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.936 [2024-11-20 15:44:14.832431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:28.936 [2024-11-20 15:44:14.832442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:32:28.936 [2024-11-20 15:44:14.832452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.936 [2024-11-20 15:44:14.875418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.936 [2024-11-20 15:44:14.875478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:28.936 [2024-11-20 15:44:14.875495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.909 ms 00:32:28.936 [2024-11-20 15:44:14.875507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.936 [2024-11-20 15:44:14.875625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.936 [2024-11-20 15:44:14.875643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:28.936 [2024-11-20 15:44:14.875655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:32:28.936 [2024-11-20 15:44:14.875666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.196 [2024-11-20 15:44:14.941334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.196 [2024-11-20 15:44:14.941390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:29.196 [2024-11-20 15:44:14.941411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.558 ms 00:32:29.196 [2024-11-20 15:44:14.941422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.196 [2024-11-20 15:44:14.941494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.196 [2024-11-20 15:44:14.941506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:29.196 [2024-11-20 15:44:14.941518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:29.196 [2024-11-20 15:44:14.941528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.196 [2024-11-20 15:44:14.942107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.196 [2024-11-20 15:44:14.942125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:29.196 [2024-11-20 15:44:14.942137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:32:29.196 [2024-11-20 15:44:14.942149] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.196 [2024-11-20 15:44:14.942305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.196 [2024-11-20 15:44:14.942322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:29.196 [2024-11-20 15:44:14.942334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:32:29.196 [2024-11-20 15:44:14.942346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.196 [2024-11-20 15:44:14.965086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.196 [2024-11-20 15:44:14.965341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:29.196 [2024-11-20 15:44:14.965389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.713 ms 00:32:29.196 [2024-11-20 15:44:14.965403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.196 [2024-11-20 15:44:14.988860] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:29.196 [2024-11-20 15:44:14.988927] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:29.196 [2024-11-20 15:44:14.988948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.196 [2024-11-20 15:44:14.988961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:29.196 [2024-11-20 15:44:14.988977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.365 ms 00:32:29.196 [2024-11-20 15:44:14.988989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.196 [2024-11-20 15:44:15.022520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.196 [2024-11-20 15:44:15.022633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:29.196 [2024-11-20 15:44:15.022692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.449 ms 00:32:29.196 [2024-11-20 15:44:15.022722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.196 [2024-11-20 15:44:15.044780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.196 [2024-11-20 15:44:15.044852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:29.196 [2024-11-20 15:44:15.044869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.944 ms 00:32:29.196 [2024-11-20 15:44:15.044880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.196 [2024-11-20 15:44:15.067911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.196 [2024-11-20 15:44:15.067981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:29.196 [2024-11-20 15:44:15.068000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.951 ms 00:32:29.196 [2024-11-20 15:44:15.068011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.196 [2024-11-20 15:44:15.068953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.196 [2024-11-20 15:44:15.068981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:29.196 [2024-11-20 15:44:15.068995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:32:29.196 [2024-11-20 15:44:15.069006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
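Each management step in the startup sequence above is logged as a trace_step triple: a name line (mngt/ftl_mngt.c:428), a duration line (:430), and a status line (:431). A minimal awk sketch that pairs step names with durations from a saved copy of this console output; ftl.log is a placeholder path, and it assumes one log entry per line, as the console originally emitted them:

awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
     /430:trace_step/ { sub(/.*duration: /, ""); print name ": " $0 }' ftl.log

Run over the recovery steps above, this would print, e.g., "Initialize NV cache: 65.558 ms" and "Restore trim metadata: 22.951 ms", making it easy to see which restore steps dominate the dirty-startup cost.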
00:32:29.454 [2024-11-20 15:44:15.165277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.454 [2024-11-20 15:44:15.165346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:29.454 [2024-11-20 15:44:15.165365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.233 ms 00:32:29.454 [2024-11-20 15:44:15.165378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.454 [2024-11-20 15:44:15.180470] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:29.454 [2024-11-20 15:44:15.184235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.454 [2024-11-20 15:44:15.184283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:29.454 [2024-11-20 15:44:15.184301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.762 ms 00:32:29.454 [2024-11-20 15:44:15.184313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.454 [2024-11-20 15:44:15.184463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.454 [2024-11-20 15:44:15.184478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:29.454 [2024-11-20 15:44:15.184491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:29.454 [2024-11-20 15:44:15.184502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.454 [2024-11-20 15:44:15.184599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.454 [2024-11-20 15:44:15.184614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:29.454 [2024-11-20 15:44:15.184627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:29.454 [2024-11-20 15:44:15.184638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.454 [2024-11-20 15:44:15.184665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.454 [2024-11-20 15:44:15.184682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:29.454 [2024-11-20 15:44:15.184693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:29.454 [2024-11-20 15:44:15.184705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.454 [2024-11-20 15:44:15.184743] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:29.454 [2024-11-20 15:44:15.184756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.454 [2024-11-20 15:44:15.184768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:29.454 [2024-11-20 15:44:15.184779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:29.454 [2024-11-20 15:44:15.184790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.454 [2024-11-20 15:44:15.227277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.454 [2024-11-20 15:44:15.227353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:29.454 [2024-11-20 15:44:15.227374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.456 ms 00:32:29.454 [2024-11-20 15:44:15.227388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.454 [2024-11-20 15:44:15.227521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.454 [2024-11-20 
15:44:15.227537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:29.454 [2024-11-20 15:44:15.227550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:29.454 [2024-11-20 15:44:15.227562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.454 [2024-11-20 15:44:15.228900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 434.314 ms, result 0 00:32:30.397  [2024-11-20T15:44:17.298Z] Copying: 33/1024 [MB] (33 MBps) [2024-11-20T15:44:18.672Z] Copying: 60/1024 [MB] (26 MBps) [2024-11-20T15:44:19.607Z] Copying: 77/1024 [MB] (17 MBps) [2024-11-20T15:44:20.543Z] Copying: 109/1024 [MB] (32 MBps) [2024-11-20T15:44:21.478Z] Copying: 142/1024 [MB] (32 MBps) [2024-11-20T15:44:22.458Z] Copying: 176/1024 [MB] (33 MBps) [2024-11-20T15:44:23.397Z] Copying: 209/1024 [MB] (32 MBps) [2024-11-20T15:44:24.332Z] Copying: 241/1024 [MB] (32 MBps) [2024-11-20T15:44:25.270Z] Copying: 275/1024 [MB] (34 MBps) [2024-11-20T15:44:26.646Z] Copying: 309/1024 [MB] (33 MBps) [2024-11-20T15:44:27.580Z] Copying: 342/1024 [MB] (33 MBps) [2024-11-20T15:44:28.544Z] Copying: 374/1024 [MB] (32 MBps) [2024-11-20T15:44:29.482Z] Copying: 407/1024 [MB] (32 MBps) [2024-11-20T15:44:30.417Z] Copying: 438/1024 [MB] (30 MBps) [2024-11-20T15:44:31.354Z] Copying: 470/1024 [MB] (31 MBps) [2024-11-20T15:44:32.290Z] Copying: 502/1024 [MB] (31 MBps) [2024-11-20T15:44:33.668Z] Copying: 534/1024 [MB] (32 MBps) [2024-11-20T15:44:34.604Z] Copying: 567/1024 [MB] (33 MBps) [2024-11-20T15:44:35.540Z] Copying: 601/1024 [MB] (33 MBps) [2024-11-20T15:44:36.476Z] Copying: 631/1024 [MB] (30 MBps) [2024-11-20T15:44:37.411Z] Copying: 660/1024 [MB] (29 MBps) [2024-11-20T15:44:38.348Z] Copying: 690/1024 [MB] (29 MBps) [2024-11-20T15:44:39.282Z] Copying: 720/1024 [MB] (30 MBps) [2024-11-20T15:44:40.658Z] Copying: 750/1024 [MB] (30 MBps) [2024-11-20T15:44:41.592Z] Copying: 781/1024 [MB] (30 MBps) [2024-11-20T15:44:42.529Z] Copying: 811/1024 [MB] (30 MBps) [2024-11-20T15:44:43.542Z] Copying: 842/1024 [MB] (30 MBps) [2024-11-20T15:44:44.479Z] Copying: 873/1024 [MB] (30 MBps) [2024-11-20T15:44:45.416Z] Copying: 907/1024 [MB] (34 MBps) [2024-11-20T15:44:46.356Z] Copying: 942/1024 [MB] (34 MBps) [2024-11-20T15:44:47.296Z] Copying: 975/1024 [MB] (33 MBps) [2024-11-20T15:44:48.714Z] Copying: 1008/1024 [MB] (33 MBps) [2024-11-20T15:44:48.714Z] Copying: 1023/1024 [MB] (15 MBps) [2024-11-20T15:44:48.714Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-20 15:44:48.681249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.756 [2024-11-20 15:44:48.681556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:02.756 [2024-11-20 15:44:48.681606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:02.756 [2024-11-20 15:44:48.681620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.756 [2024-11-20 15:44:48.683879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:03.015 [2024-11-20 15:44:48.691870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.015 [2024-11-20 15:44:48.692055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:03.015 [2024-11-20 15:44:48.692146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.787 ms 00:33:03.015 [2024-11-20 15:44:48.692231] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.015 [2024-11-20 15:44:48.703330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.015 [2024-11-20 15:44:48.703526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:03.015 [2024-11-20 15:44:48.703634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.265 ms 00:33:03.015 [2024-11-20 15:44:48.703677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.015 [2024-11-20 15:44:48.725202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.015 [2024-11-20 15:44:48.725423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:03.015 [2024-11-20 15:44:48.725452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.419 ms 00:33:03.015 [2024-11-20 15:44:48.725465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.015 [2024-11-20 15:44:48.731384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.015 [2024-11-20 15:44:48.731435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:03.015 [2024-11-20 15:44:48.731451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.870 ms 00:33:03.015 [2024-11-20 15:44:48.731462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.015 [2024-11-20 15:44:48.773751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.015 [2024-11-20 15:44:48.773816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:03.015 [2024-11-20 15:44:48.773833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.199 ms 00:33:03.015 [2024-11-20 15:44:48.773843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.015 [2024-11-20 15:44:48.797305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.015 [2024-11-20 15:44:48.797368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:03.015 [2024-11-20 15:44:48.797387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.393 ms 00:33:03.015 [2024-11-20 15:44:48.797399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.015 [2024-11-20 15:44:48.882982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.015 [2024-11-20 15:44:48.883213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:03.015 [2024-11-20 15:44:48.883257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.509 ms 00:33:03.015 [2024-11-20 15:44:48.883270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.015 [2024-11-20 15:44:48.925672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.015 [2024-11-20 15:44:48.925908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:03.015 [2024-11-20 15:44:48.925936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.367 ms 00:33:03.015 [2024-11-20 15:44:48.925948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.015 [2024-11-20 15:44:48.968386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.015 [2024-11-20 15:44:48.968588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:03.016 [2024-11-20 15:44:48.968614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 42.374 ms 00:33:03.016 [2024-11-20 15:44:48.968628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.276 [2024-11-20 15:44:49.010051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.276 [2024-11-20 15:44:49.010290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:03.276 [2024-11-20 15:44:49.010318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.362 ms 00:33:03.276 [2024-11-20 15:44:49.010331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.276 [2024-11-20 15:44:49.051394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.276 [2024-11-20 15:44:49.051460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:03.276 [2024-11-20 15:44:49.051479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.937 ms 00:33:03.276 [2024-11-20 15:44:49.051491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.276 [2024-11-20 15:44:49.051547] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:03.276 [2024-11-20 15:44:49.051583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 122112 / 261120 wr_cnt: 1 state: open 00:33:03.276 [2024-11-20 15:44:49.051599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 
wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.051991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:03.276 [2024-11-20 15:44:49.052325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052383] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052682] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:03.277 [2024-11-20 15:44:49.052800] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:03.277 [2024-11-20 15:44:49.052811] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae539e5c-0d15-4f4f-a98d-b97d05826ce0 00:33:03.277 [2024-11-20 15:44:49.052823] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 122112 00:33:03.277 [2024-11-20 15:44:49.052843] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 123072 00:33:03.277 [2024-11-20 15:44:49.052867] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 122112 00:33:03.277 [2024-11-20 15:44:49.052879] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:33:03.277 [2024-11-20 15:44:49.052889] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:03.277 [2024-11-20 15:44:49.052901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:03.277 [2024-11-20 15:44:49.052912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:03.277 [2024-11-20 15:44:49.052922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:03.277 [2024-11-20 15:44:49.052931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:03.277 [2024-11-20 15:44:49.052943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.277 [2024-11-20 15:44:49.052954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:03.277 [2024-11-20 15:44:49.052976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:33:03.277 [2024-11-20 15:44:49.052987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.277 [2024-11-20 15:44:49.075180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.277 [2024-11-20 15:44:49.075240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:03.277 [2024-11-20 15:44:49.075256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.142 ms 00:33:03.277 [2024-11-20 15:44:49.075268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.277 [2024-11-20 15:44:49.076063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.277 [2024-11-20 15:44:49.076171] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:03.277 [2024-11-20 15:44:49.076250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:33:03.277 [2024-11-20 15:44:49.076311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.277 [2024-11-20 15:44:49.132539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.277 [2024-11-20 15:44:49.132814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:03.277 [2024-11-20 15:44:49.132841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.277 [2024-11-20 15:44:49.132854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.277 [2024-11-20 15:44:49.132944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.277 [2024-11-20 15:44:49.132957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:03.277 [2024-11-20 15:44:49.132969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.277 [2024-11-20 15:44:49.132985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.277 [2024-11-20 15:44:49.133107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.277 [2024-11-20 15:44:49.133123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:03.277 [2024-11-20 15:44:49.133135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.277 [2024-11-20 15:44:49.133146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.277 [2024-11-20 15:44:49.133166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.277 [2024-11-20 15:44:49.133178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:03.277 [2024-11-20 15:44:49.133189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.277 [2024-11-20 15:44:49.133200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.536 [2024-11-20 15:44:49.270791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.536 [2024-11-20 15:44:49.271063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:03.536 [2024-11-20 15:44:49.271179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.536 [2024-11-20 15:44:49.271220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.536 [2024-11-20 15:44:49.380159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.536 [2024-11-20 15:44:49.380441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:03.536 [2024-11-20 15:44:49.380467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.536 [2024-11-20 15:44:49.380480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.536 [2024-11-20 15:44:49.380725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.536 [2024-11-20 15:44:49.380824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:03.536 [2024-11-20 15:44:49.380844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.536 [2024-11-20 15:44:49.380856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.536 [2024-11-20 15:44:49.380922] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.536 [2024-11-20 15:44:49.380935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:03.536 [2024-11-20 15:44:49.380947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.536 [2024-11-20 15:44:49.380958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.536 [2024-11-20 15:44:49.381083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.536 [2024-11-20 15:44:49.381099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:03.536 [2024-11-20 15:44:49.381111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.536 [2024-11-20 15:44:49.381122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.536 [2024-11-20 15:44:49.381162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.536 [2024-11-20 15:44:49.381177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:03.536 [2024-11-20 15:44:49.381188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.536 [2024-11-20 15:44:49.381199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.536 [2024-11-20 15:44:49.381238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.536 [2024-11-20 15:44:49.381256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:03.536 [2024-11-20 15:44:49.381267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.536 [2024-11-20 15:44:49.381279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.536 [2024-11-20 15:44:49.381325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.536 [2024-11-20 15:44:49.381339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:03.536 [2024-11-20 15:44:49.381350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.536 [2024-11-20 15:44:49.381362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.536 [2024-11-20 15:44:49.381489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 701.056 ms, result 0 00:33:06.066 00:33:06.066 00:33:06.066 15:44:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:33:07.964 15:44:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:07.964 [2024-11-20 15:44:53.559642] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
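(Editor's note: the FTL shutdown statistics a few entries above report total writes: 123072 against user writes: 122112, alongside WAF: 1.0079. Those numbers are consistent with the write amplification factor simply being the ratio of total to user writes; a minimal sanity check, assuming that definition, runnable in any POSIX shell:
+ awk 'BEGIN { printf "WAF = %.4f\n", 123072 / 122112 }'
WAF = 1.0079
The difference of 960 blocks (123072 - 122112) would then be the FTL's own metadata writes for this run.)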
00:33:07.964 [2024-11-20 15:44:53.559789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82241 ] 00:33:07.964 [2024-11-20 15:44:53.745339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.964 [2024-11-20 15:44:53.920462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.530 [2024-11-20 15:44:54.304455] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:08.530 [2024-11-20 15:44:54.304529] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:08.530 [2024-11-20 15:44:54.469532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.530 [2024-11-20 15:44:54.469616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:08.530 [2024-11-20 15:44:54.469638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:08.530 [2024-11-20 15:44:54.469648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.530 [2024-11-20 15:44:54.469713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.530 [2024-11-20 15:44:54.469725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:08.530 [2024-11-20 15:44:54.469740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:33:08.530 [2024-11-20 15:44:54.469751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.530 [2024-11-20 15:44:54.469774] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:08.530 [2024-11-20 15:44:54.470925] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:08.530 [2024-11-20 15:44:54.471129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.530 [2024-11-20 15:44:54.471147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:08.530 [2024-11-20 15:44:54.471162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:33:08.530 [2024-11-20 15:44:54.471173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.530 [2024-11-20 15:44:54.472754] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:08.789 [2024-11-20 15:44:54.494764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.789 [2024-11-20 15:44:54.494834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:08.789 [2024-11-20 15:44:54.494853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.008 ms 00:33:08.789 [2024-11-20 15:44:54.494865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.789 [2024-11-20 15:44:54.494974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.789 [2024-11-20 15:44:54.494990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:08.789 [2024-11-20 15:44:54.495003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:08.789 [2024-11-20 15:44:54.495015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.789 [2024-11-20 15:44:54.504795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:08.789 [2024-11-20 15:44:54.505161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:08.789 [2024-11-20 15:44:54.505201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.662 ms 00:33:08.789 [2024-11-20 15:44:54.505232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.789 [2024-11-20 15:44:54.505391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.789 [2024-11-20 15:44:54.505415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:08.789 [2024-11-20 15:44:54.505433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:33:08.789 [2024-11-20 15:44:54.505449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.789 [2024-11-20 15:44:54.505626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.789 [2024-11-20 15:44:54.505669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:08.789 [2024-11-20 15:44:54.505690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:08.789 [2024-11-20 15:44:54.505707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.789 [2024-11-20 15:44:54.505766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:08.789 [2024-11-20 15:44:54.511770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.789 [2024-11-20 15:44:54.512047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:08.789 [2024-11-20 15:44:54.512087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.021 ms 00:33:08.789 [2024-11-20 15:44:54.512115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.789 [2024-11-20 15:44:54.512196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.789 [2024-11-20 15:44:54.512219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:08.789 [2024-11-20 15:44:54.512241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:33:08.789 [2024-11-20 15:44:54.512259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.789 [2024-11-20 15:44:54.512332] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:08.789 [2024-11-20 15:44:54.512368] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:08.789 [2024-11-20 15:44:54.512422] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:08.789 [2024-11-20 15:44:54.512455] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:08.789 [2024-11-20 15:44:54.512598] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:08.789 [2024-11-20 15:44:54.512627] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:08.789 [2024-11-20 15:44:54.512651] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:08.789 [2024-11-20 15:44:54.512675] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:08.790 [2024-11-20 15:44:54.512698] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:08.790 [2024-11-20 15:44:54.512717] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:08.790 [2024-11-20 15:44:54.512734] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:08.790 [2024-11-20 15:44:54.512749] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:08.790 [2024-11-20 15:44:54.512775] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:08.790 [2024-11-20 15:44:54.512793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.790 [2024-11-20 15:44:54.512810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:08.790 [2024-11-20 15:44:54.512827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:33:08.790 [2024-11-20 15:44:54.512843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.790 [2024-11-20 15:44:54.512961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.790 [2024-11-20 15:44:54.512990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:08.790 [2024-11-20 15:44:54.513009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:33:08.790 [2024-11-20 15:44:54.513026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.790 [2024-11-20 15:44:54.513161] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:08.790 [2024-11-20 15:44:54.513186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:08.790 [2024-11-20 15:44:54.513205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:08.790 [2024-11-20 15:44:54.513222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:08.790 [2024-11-20 15:44:54.513255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:08.790 [2024-11-20 15:44:54.513286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:08.790 [2024-11-20 15:44:54.513302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:08.790 [2024-11-20 15:44:54.513334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:08.790 [2024-11-20 15:44:54.513350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:08.790 [2024-11-20 15:44:54.513365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:08.790 [2024-11-20 15:44:54.513381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:08.790 [2024-11-20 15:44:54.513400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:08.790 [2024-11-20 15:44:54.513434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:08.790 [2024-11-20 15:44:54.513474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:08.790 [2024-11-20 15:44:54.513491] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:08.790 [2024-11-20 15:44:54.513541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.790 [2024-11-20 15:44:54.513574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:08.790 [2024-11-20 15:44:54.513609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.790 [2024-11-20 15:44:54.513643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:08.790 [2024-11-20 15:44:54.513660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.790 [2024-11-20 15:44:54.513691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:08.790 [2024-11-20 15:44:54.513708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.790 [2024-11-20 15:44:54.513742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:08.790 [2024-11-20 15:44:54.513761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:08.790 [2024-11-20 15:44:54.513801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:08.790 [2024-11-20 15:44:54.513819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:08.790 [2024-11-20 15:44:54.513837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:08.790 [2024-11-20 15:44:54.513854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:08.790 [2024-11-20 15:44:54.513870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:08.790 [2024-11-20 15:44:54.513886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:08.790 [2024-11-20 15:44:54.513919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:08.790 [2024-11-20 15:44:54.513937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.790 [2024-11-20 15:44:54.513953] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:08.790 [2024-11-20 15:44:54.513972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:08.790 [2024-11-20 15:44:54.513994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:08.790 [2024-11-20 15:44:54.514016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.790 [2024-11-20 15:44:54.514033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:08.790 [2024-11-20 15:44:54.514049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:08.790 [2024-11-20 15:44:54.514069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:08.790 
[2024-11-20 15:44:54.514085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:08.790 [2024-11-20 15:44:54.514100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:08.790 [2024-11-20 15:44:54.514117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:08.790 [2024-11-20 15:44:54.514136] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:08.790 [2024-11-20 15:44:54.514158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:08.790 [2024-11-20 15:44:54.514177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:08.790 [2024-11-20 15:44:54.514195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:08.790 [2024-11-20 15:44:54.514213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:08.790 [2024-11-20 15:44:54.514230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:08.790 [2024-11-20 15:44:54.514247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:08.790 [2024-11-20 15:44:54.514265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:08.790 [2024-11-20 15:44:54.514282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:08.790 [2024-11-20 15:44:54.514299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:08.790 [2024-11-20 15:44:54.514317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:08.790 [2024-11-20 15:44:54.514334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:08.790 [2024-11-20 15:44:54.514352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:08.790 [2024-11-20 15:44:54.514370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:08.790 [2024-11-20 15:44:54.514389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:08.790 [2024-11-20 15:44:54.514408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:08.790 [2024-11-20 15:44:54.514426] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:08.790 [2024-11-20 15:44:54.514455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:08.790 [2024-11-20 15:44:54.514476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:08.790 [2024-11-20 15:44:54.514498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:08.790 [2024-11-20 15:44:54.514517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:08.790 [2024-11-20 15:44:54.514537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:08.790 [2024-11-20 15:44:54.514559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.790 [2024-11-20 15:44:54.514596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:08.791 [2024-11-20 15:44:54.514632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms 00:33:08.791 [2024-11-20 15:44:54.514652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.791 [2024-11-20 15:44:54.563628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.791 [2024-11-20 15:44:54.563711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:08.791 [2024-11-20 15:44:54.563737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.881 ms 00:33:08.791 [2024-11-20 15:44:54.563751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.791 [2024-11-20 15:44:54.563882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.791 [2024-11-20 15:44:54.563901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:08.791 [2024-11-20 15:44:54.563918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:33:08.791 [2024-11-20 15:44:54.563933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.791 [2024-11-20 15:44:54.625446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.791 [2024-11-20 15:44:54.625508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:08.791 [2024-11-20 15:44:54.625524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.373 ms 00:33:08.791 [2024-11-20 15:44:54.625552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.791 [2024-11-20 15:44:54.625640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.791 [2024-11-20 15:44:54.625655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:08.791 [2024-11-20 15:44:54.625673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:08.791 [2024-11-20 15:44:54.625684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.791 [2024-11-20 15:44:54.626222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.791 [2024-11-20 15:44:54.626240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:08.791 [2024-11-20 15:44:54.626252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:33:08.791 [2024-11-20 15:44:54.626263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.791 [2024-11-20 15:44:54.626395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.791 [2024-11-20 15:44:54.626415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:08.791 [2024-11-20 15:44:54.626428] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:33:08.791 [2024-11-20 15:44:54.626446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.791 [2024-11-20 15:44:54.647020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.791 [2024-11-20 15:44:54.647083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:08.791 [2024-11-20 15:44:54.647105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.547 ms 00:33:08.791 [2024-11-20 15:44:54.647117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.791 [2024-11-20 15:44:54.668959] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:08.791 [2024-11-20 15:44:54.669307] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:08.791 [2024-11-20 15:44:54.669351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.791 [2024-11-20 15:44:54.669372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:08.791 [2024-11-20 15:44:54.669397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.068 ms 00:33:08.791 [2024-11-20 15:44:54.669419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.791 [2024-11-20 15:44:54.705379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.791 [2024-11-20 15:44:54.705469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:08.791 [2024-11-20 15:44:54.705487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.822 ms 00:33:08.791 [2024-11-20 15:44:54.705516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.791 [2024-11-20 15:44:54.726381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.791 [2024-11-20 15:44:54.726484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:08.791 [2024-11-20 15:44:54.726501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.741 ms 00:33:08.791 [2024-11-20 15:44:54.726511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.050 [2024-11-20 15:44:54.748498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.050 [2024-11-20 15:44:54.748828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:09.050 [2024-11-20 15:44:54.748859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.862 ms 00:33:09.050 [2024-11-20 15:44:54.748872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.050 [2024-11-20 15:44:54.749929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.050 [2024-11-20 15:44:54.749966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:09.050 [2024-11-20 15:44:54.749981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:33:09.050 [2024-11-20 15:44:54.749998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.050 [2024-11-20 15:44:54.849790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.050 [2024-11-20 15:44:54.849864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:09.050 [2024-11-20 15:44:54.849889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 99.752 ms 00:33:09.050 [2024-11-20 15:44:54.849900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.050 [2024-11-20 15:44:54.865132] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:09.050 [2024-11-20 15:44:54.868694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.050 [2024-11-20 15:44:54.868939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:09.050 [2024-11-20 15:44:54.868987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.689 ms 00:33:09.050 [2024-11-20 15:44:54.868999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.050 [2024-11-20 15:44:54.869138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.050 [2024-11-20 15:44:54.869154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:09.050 [2024-11-20 15:44:54.869167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:09.050 [2024-11-20 15:44:54.869183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.050 [2024-11-20 15:44:54.870944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.050 [2024-11-20 15:44:54.871000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:09.050 [2024-11-20 15:44:54.871015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.711 ms 00:33:09.050 [2024-11-20 15:44:54.871026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.050 [2024-11-20 15:44:54.871072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.050 [2024-11-20 15:44:54.871085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:09.050 [2024-11-20 15:44:54.871097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:09.050 [2024-11-20 15:44:54.871108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.050 [2024-11-20 15:44:54.871153] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:09.050 [2024-11-20 15:44:54.871167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.050 [2024-11-20 15:44:54.871179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:09.050 [2024-11-20 15:44:54.871190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:33:09.050 [2024-11-20 15:44:54.871201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.050 [2024-11-20 15:44:54.911464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.050 [2024-11-20 15:44:54.911540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:09.050 [2024-11-20 15:44:54.911558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.235 ms 00:33:09.050 [2024-11-20 15:44:54.911597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.050 [2024-11-20 15:44:54.911733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.050 [2024-11-20 15:44:54.911764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:09.050 [2024-11-20 15:44:54.911776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:09.050 [2024-11-20 15:44:54.911788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
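(Editor's note: the layout dump above lists L2P entries: 20971520 with L2P address size: 4, which lines up exactly with the 80.00 MiB reported for the l2p region. A quick check, assuming the region size is simply entry count times address size, with 1 MiB = 1048576 bytes:
+ echo $(( 20971520 * 4 / 1024 / 1024 ))
80
The mapping table therefore accounts for 80 of the 113-odd MiB of NV cache metadata regions shown in the dump.)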
00:33:09.050 [2024-11-20 15:44:54.915410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 444.424 ms, result 0
00:33:10.435 [2024-11-20T15:44:57.327Z] Copying: 1008/1048576 [kB] (1008 kBps)
[2024-11-20T15:44:58.262Z] Copying: 5576/1048576 [kB] (4568 kBps)
[2024-11-20T15:44:59.195Z] Copying: 40/1024 [MB] (34 MBps)
[2024-11-20T15:45:00.315Z] Copying: 78/1024 [MB] (37 MBps)
[2024-11-20T15:45:01.249Z] Copying: 116/1024 [MB] (37 MBps)
[2024-11-20T15:45:02.185Z] Copying: 155/1024 [MB] (39 MBps)
[2024-11-20T15:45:03.557Z] Copying: 195/1024 [MB] (39 MBps)
[2024-11-20T15:45:04.490Z] Copying: 234/1024 [MB] (38 MBps)
[2024-11-20T15:45:05.433Z] Copying: 268/1024 [MB] (34 MBps)
[2024-11-20T15:45:06.374Z] Copying: 303/1024 [MB] (34 MBps)
[2024-11-20T15:45:07.309Z] Copying: 339/1024 [MB] (36 MBps)
[2024-11-20T15:45:08.246Z] Copying: 379/1024 [MB] (39 MBps)
[2024-11-20T15:45:09.181Z] Copying: 418/1024 [MB] (39 MBps)
[2024-11-20T15:45:10.566Z] Copying: 458/1024 [MB] (39 MBps)
[2024-11-20T15:45:11.501Z] Copying: 498/1024 [MB] (40 MBps)
[2024-11-20T15:45:12.436Z] Copying: 531/1024 [MB] (33 MBps)
[2024-11-20T15:45:13.372Z] Copying: 564/1024 [MB] (33 MBps)
[2024-11-20T15:45:14.307Z] Copying: 597/1024 [MB] (33 MBps)
[2024-11-20T15:45:15.242Z] Copying: 632/1024 [MB] (34 MBps)
[2024-11-20T15:45:16.176Z] Copying: 665/1024 [MB] (33 MBps)
[2024-11-20T15:45:17.581Z] Copying: 702/1024 [MB] (37 MBps)
[2024-11-20T15:45:18.515Z] Copying: 737/1024 [MB] (34 MBps)
[2024-11-20T15:45:19.449Z] Copying: 773/1024 [MB] (35 MBps)
[2024-11-20T15:45:20.380Z] Copying: 806/1024 [MB] (33 MBps)
[2024-11-20T15:45:21.311Z] Copying: 839/1024 [MB] (33 MBps)
[2024-11-20T15:45:22.244Z] Copying: 873/1024 [MB] (33 MBps)
[2024-11-20T15:45:23.178Z] Copying: 907/1024 [MB] (33 MBps)
[2024-11-20T15:45:24.552Z] Copying: 940/1024 [MB] (33 MBps)
[2024-11-20T15:45:25.485Z] Copying: 976/1024 [MB] (36 MBps)
[2024-11-20T15:45:25.485Z] Copying: 1014/1024 [MB] (37 MBps)
[2024-11-20T15:45:26.418Z] Copying: 1024/1024 [MB] (average 33 MBps)
[2024-11-20 15:45:26.178461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:40.460 [2024-11-20 15:45:26.178845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:33:40.460 [2024-11-20 15:45:26.178885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:33:40.460 [2024-11-20 15:45:26.178904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:40.460 [2024-11-20 15:45:26.178962] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:33:40.460 [2024-11-20 15:45:26.186708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:40.460 [2024-11-20 15:45:26.187007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:33:40.460 [2024-11-20 15:45:26.187043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.715 ms
00:33:40.460 [2024-11-20 15:45:26.187063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:40.460 [2024-11-20 15:45:26.187417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:40.460 [2024-11-20 15:45:26.187440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:33:40.460 [2024-11-20 15:45:26.187465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms
00:33:40.460 [2024-11-20 15:45:26.187482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 00:33:40.460 [2024-11-20 15:45:26.202986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.460 [2024-11-20 15:45:26.203080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:40.461 [2024-11-20 15:45:26.203107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.470 ms 00:33:40.461 [2024-11-20 15:45:26.203124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.461 [2024-11-20 15:45:26.211634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.461 [2024-11-20 15:45:26.211717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:40.461 [2024-11-20 15:45:26.211755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.451 ms 00:33:40.461 [2024-11-20 15:45:26.211771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.461 [2024-11-20 15:45:26.273781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.461 [2024-11-20 15:45:26.274258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:40.461 [2024-11-20 15:45:26.274320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.917 ms 00:33:40.461 [2024-11-20 15:45:26.274349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.461 [2024-11-20 15:45:26.310378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.461 [2024-11-20 15:45:26.310506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:40.461 [2024-11-20 15:45:26.310544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.900 ms 00:33:40.461 [2024-11-20 15:45:26.310596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.461 [2024-11-20 15:45:26.313210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.461 [2024-11-20 15:45:26.313529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:40.461 [2024-11-20 15:45:26.313599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.443 ms 00:33:40.461 [2024-11-20 15:45:26.313627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.461 [2024-11-20 15:45:26.375220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.461 [2024-11-20 15:45:26.375348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:40.461 [2024-11-20 15:45:26.375383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.508 ms 00:33:40.461 [2024-11-20 15:45:26.375405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.719 [2024-11-20 15:45:26.439154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.719 [2024-11-20 15:45:26.439525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:40.719 [2024-11-20 15:45:26.439608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.620 ms 00:33:40.719 [2024-11-20 15:45:26.439625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.719 [2024-11-20 15:45:26.502549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.719 [2024-11-20 15:45:26.502666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:40.719 [2024-11-20 15:45:26.502692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.812 ms 00:33:40.719 [2024-11-20 
15:45:26.502708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.719 [2024-11-20 15:45:26.565030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.719 [2024-11-20 15:45:26.565117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:40.719 [2024-11-20 15:45:26.565143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.107 ms 00:33:40.719 [2024-11-20 15:45:26.565159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.719 [2024-11-20 15:45:26.565253] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:40.719 [2024-11-20 15:45:26.565279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:40.719 [2024-11-20 15:45:26.565301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:33:40.719 [2024-11-20 15:45:26.565319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:40.720 [2024-11-20 15:45:26.565655] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21-100: 0 / 261120 wr_cnt: 0 state: free
[2024-11-20 15:45:26.567058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae539e5c-0d15-4f4f-a98d-b97d05826ce0
[2024-11-20 15:45:26.567076] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
[2024-11-20 15:45:26.567091] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 142528
[2024-11-20 15:45:26.567106] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 140544
[2024-11-20 15:45:26.567129] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0141
[2024-11-20 15:45:26.567144] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit 0, high 0, low 0, start 0
[2024-11-20 15:45:26.567237] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Dump statistics": duration 1.986 ms, status 0
[2024-11-20 15:45:26.591931] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Deinitialize L2P": duration 24.566 ms, status 0
[2024-11-20 15:45:26.592645] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Deinitialize P2L checkpointing": duration 0.573 ms, status 0
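Two counters in the statistics dump above determine the reported WAF: total media writes divided by the writes the user actually submitted. A minimal sketch (the helper name waf is ours, not an SPDK symbol); the same formula also explains the "WAF: inf" in the post-copy dump later in this run, where user writes is 0:

    def waf(total_writes: int, user_writes: int) -> float:
        """WAF = total writes / user writes; infinite when nothing was user-written."""
        return float("inf") if user_writes == 0 else total_writes / user_writes

    # Values from the dump above: 142528 total writes, 140544 user writes.
    print(f"{waf(142528, 140544):.4f}")  # -> 1.0141, matching the logged WAF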
[2024-11-20 15:45:26.655421] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Initialize reloc": duration 0.000 ms, status 0
[2024-11-20 15:45:26.655635] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Initialize bands metadata": duration 0.000 ms, status 0
[2024-11-20 15:45:26.655840] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Initialize trim map": duration 0.000 ms, status 0
[2024-11-20 15:45:26.655903] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Initialize valid map": duration 0.000 ms, status 0
[2024-11-20 15:45:26.797961] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Initialize NV cache": duration 0.000 ms, status 0
[2024-11-20 15:45:26.910617] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Initialize metadata": duration 0.000 ms, status 0
[2024-11-20 15:45:26.910859] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Initialize core IO channel": duration 0.000 ms, status 0
[2024-11-20 15:45:26.910959] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Initialize bands": duration 0.000 ms, status 0
[2024-11-20 15:45:26.911121] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Initialize memory pools": duration 0.000 ms, status 0
[2024-11-20 15:45:26.911203] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Initialize superblock": duration 0.000 ms, status 0
[2024-11-20 15:45:26.911280] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Open cache bdev": duration 0.000 ms, status 0
[2024-11-20 15:45:26.911365] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback "Open base bdev": duration 0.000 ms, status 0
[2024-11-20 15:45:26.911531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 733.038 ms, result 0
15:45:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
15:45:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
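Step @94 re-checks the first half of the test data against a digest recorded before the dirty shutdown, and step @95 reads the second 262144-block half back out of ftl0 into testfile2 for the same comparison. The md5 check amounts to the following sketch (paths shortened; the real files live under /home/vagrant/spdk_repo/spdk/test/ftl/, and md5_of is an illustrative helper, not part of the test scripts):

    import hashlib

    def md5_of(path: str, chunk: int = 1 << 20) -> str:
        """Stream a file through MD5 the way md5sum does."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    # testfile.md5 holds "digest  filename", the format md5sum -c consumes.
    expected = open("testfile.md5").read().split()[0]
    assert md5_of("testfile") == expected, "data changed across the dirty shutdown"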
[2024-11-20 15:45:30.112683] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
[2024-11-20 15:45:30.112870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82598 ]
[2024-11-20 15:45:30.316266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 15:45:30.485878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 15:45:30.917945] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 15:45:30.918032] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 15:45:31.085398] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Check configuration": duration 0.007 ms, status 0
[2024-11-20 15:45:31.085621] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Open base bdev": duration 0.069 ms, status 0
[2024-11-20 15:45:31.085711] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-20 15:45:31.086983] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-20 15:45:31.087026] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Open cache bdev": duration 1.321 ms, status 0
[2024-11-20 15:45:31.088766] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-20 15:45:31.114024] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Load super block": duration 25.255 ms, status 0
[2024-11-20 15:45:31.114283] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Validate super block": duration 0.043 ms, status 0
[2024-11-20 15:45:31.122392] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize memory pools": duration 7.918 ms, status 0
[2024-11-20 15:45:31.122637] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize bands": duration 0.101 ms, status 0
[2024-11-20 15:45:31.122749] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Register IO device": duration 0.010 ms, status 0
[2024-11-20 15:45:31.122827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-20 15:45:31.128704] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize core IO channel": duration 5.889 ms, status 0
[2024-11-20 15:45:31.128862] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Decorate bands": duration 0.014 ms, status 0
[2024-11-20 15:45:31.128983] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-20 15:45:31.129010] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blobs loaded: nvc 0x150, base 0x48, layout 0x190 bytes
[2024-11-20 15:45:31.129185] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blobs stored: nvc 0x150, base 0x48, layout 0x190 bytes
[2024-11-20 15:45:31.129231] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-20 15:45:31.129245] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-20 15:45:31.129258] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-20 15:45:31.129271] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-20 15:45:31.129283] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-20 15:45:31.129299] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-20 15:45:31.129310] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize layout": duration 0.332 ms, status 0
[2024-11-20 15:45:31.129447] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Verify layout": duration 0.067 ms, status 0
[2024-11-20 15:45:31.129658] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region           offset (MiB)   blocks (MiB)
    sb                     0.00           0.12
    l2p                    0.12          80.00
    band_md               80.12           0.50
    band_md_mirror        80.62           0.50
    nvc_md               113.88           0.12
    nvc_md_mirror        114.00           0.12
    p2l0                  81.12           8.00
    p2l1                  89.12           8.00
    p2l2                  97.12           8.00
    p2l3                 105.12           8.00
    trim_md              113.12           0.25
    trim_md_mirror       113.38           0.25
    trim_log             113.62           0.12
    trim_log_mirror      113.75           0.12
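The layout parameters above are internally consistent: the L2P table needs one address per logical block, so the entry count times the address size must fit the l2p region, and it does exactly. A quick check (plain arithmetic, no SPDK code involved):

    # "L2P entries" x "L2P address size" from the lines above.
    entries, entry_size = 20_971_520, 4
    print(entries * entry_size / (1024 * 1024))  # -> 80.0, the l2p region's 80.00 MiB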
[2024-11-20 15:45:31.130177] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region           offset (MiB)   blocks (MiB)
    sb_mirror              0.00           0.12
    vmap              102400.25           3.38
    data_btm               0.25      102400.00
[2024-11-20 15:45:31.130295] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    type:0x0         ver:5  blk_offs:0x0       blk_sz:0x20
    type:0x2         ver:0  blk_offs:0x20      blk_sz:0x5000
    type:0x3         ver:2  blk_offs:0x5020    blk_sz:0x80
    type:0x4         ver:2  blk_offs:0x50a0    blk_sz:0x80
    type:0xa         ver:2  blk_offs:0x5120    blk_sz:0x800
    type:0xb         ver:2  blk_offs:0x5920    blk_sz:0x800
    type:0xc         ver:2  blk_offs:0x6120    blk_sz:0x800
    type:0xd         ver:2  blk_offs:0x6920    blk_sz:0x800
    type:0xe         ver:0  blk_offs:0x7120    blk_sz:0x40
    type:0xf         ver:0  blk_offs:0x7160    blk_sz:0x40
    type:0x10        ver:1  blk_offs:0x71a0    blk_sz:0x20
    type:0x11        ver:1  blk_offs:0x71c0    blk_sz:0x20
    type:0x6         ver:2  blk_offs:0x71e0    blk_sz:0x20
    type:0x7         ver:2  blk_offs:0x7200    blk_sz:0x20
    type:0xfffffffe  ver:0  blk_offs:0x7220    blk_sz:0x13c0e0
[2024-11-20 15:45:31.130499] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    type:0x1         ver:5  blk_offs:0x0       blk_sz:0x20
    type:0xfffffffe  ver:0  blk_offs:0x20      blk_sz:0x20
    type:0x9         ver:0  blk_offs:0x40      blk_sz:0x1900000
    type:0x5         ver:0  blk_offs:0x1900040 blk_sz:0x360
    type:0xfffffffe  ver:0  blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-20 15:45:31.130592] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Layout upgrade": duration 1.039 ms, status 0
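The superblock rows give the same layout in raw FTL blocks, in hex. Converting them with a 4 KiB block, which the dump itself implies (the 0x5000-block l2p region, type 0x2, is reported as 80.00 MiB), reproduces the human-readable figures; FTL_BLOCK and to_mib below are our names for the conversion, not SPDK identifiers:

    FTL_BLOCK = 4096  # implied by 0x5000 blocks == 80.00 MiB in the dumps above

    def to_mib(blocks_hex: str) -> float:
        """Convert a blk_sz/blk_offs hex field to MiB at 4 KiB per block."""
        return int(blocks_hex, 16) * FTL_BLOCK / (1024 * 1024)

    print(to_mib("0x5000"))     # -> 80.0     (type 0x2, the l2p table)
    print(to_mib("0x20"))       # -> 0.125    (the superblock, shown as 0.12 MiB)
    print(to_mib("0x1900000"))  # -> 102400.0 (type 0x9, the base-dev data region)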
[2024-11-20 15:45:31.177048] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize metadata": duration 46.342 ms, status 0
[2024-11-20 15:45:31.177268] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize band addresses": duration 0.067 ms, status 0
[2024-11-20 15:45:31.242792] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize NV cache": duration 65.394 ms, status 0
[2024-11-20 15:45:31.243226] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize valid map": duration 0.004 ms, status 0
[2024-11-20 15:45:31.243874] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize trim map": duration 0.526 ms, status 0
[2024-11-20 15:45:31.244069] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize bands metadata": duration 0.120 ms, status 0
[2024-11-20 15:45:31.267089] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize reloc": duration 22.937 ms, status 0
[2024-11-20 15:45:31.291944] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-11-20 15:45:31.292009] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-20 15:45:31.292035] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Restore NV cache metadata": duration 24.569 ms, status 0
[2024-11-20 15:45:31.330271] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Restore valid map metadata": duration 38.086 ms, status 0
[2024-11-20 15:45:31.354275] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Restore band info metadata": duration 23.776 ms, status 0
[2024-11-20 15:45:31.377691] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Restore trim metadata": duration 22.959 ms, status 0
[2024-11-20 15:45:31.379095] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize P2L checkpointing": duration 0.886 ms, status 0
[2024-11-20 15:45:31.495077] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Restore P2L checkpoints": duration 115.870 ms, status 0
[2024-11-20 15:45:31.511376] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-20 15:45:31.515026] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Initialize L2P": duration 19.495 ms, status 0
[2024-11-20 15:45:31.515247] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Restore L2P": duration 0.010 ms, status 0
[2024-11-20 15:45:31.516251] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Finalize band initialization": duration 0.882 ms, status 0
[2024-11-20 15:45:31.516338] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Start core poller": duration 0.006 ms, status 0
[2024-11-20 15:45:31.516421] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-20 15:45:31.516436] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Self test on startup": duration 0.016 ms, status 0
[2024-11-20 15:45:31.561475] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Set FTL dirty state": duration 44.978 ms, status 0
[2024-11-20 15:45:31.561768] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Finalize initialization": duration 0.046 ms, status 0
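The startup steps run one after another, so their durations should account for most of the total that finish_msg reports below: summing the step durations above gives roughly 469 ms against the logged 477.294 ms, the remainder being time spent between steps. A log-scraping sketch (the input file name is hypothetical):

    import re

    def sum_step_durations(log_text: str) -> float:
        """Sum every 'duration: X ms' (or 'duration X ms') trace_step field."""
        return sum(float(ms) for ms in re.findall(r"duration:? ([0-9.]+) ms", log_text))

    # with open("ftl_startup.log") as f:
    #     print(sum_step_durations(f.read()))  # ~469 ms for the steps above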
[2024-11-20 15:45:31.563225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 477.294 ms, result 0
[2024-11-20T15:45:34.095Z .. 2024-11-20T15:46:03.855Z] Copying: 34/1024 .. 1024/1024 [MB], 32 progress ticks at 28-34 MBps (average 32 MBps)
[2024-11-20 15:46:03.806513] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Deinit core IO channel": duration 0.005 ms, status 0
[2024-11-20 15:46:03.806746] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-20 15:46:03.813451] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Unregister IO device": duration 6.665 ms, status 0
[2024-11-20 15:46:03.813985] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Stop core poller": duration 0.311 ms, status 0
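The reported "average 32 MBps" is consistent with the wall clock: spdk_dd moved 1024 MB between the end of FTL startup and the final progress tick. The timestamps below are read off the surrounding log lines (the trailing 'Z' is dropped to suit fromisoformat):

    from datetime import datetime

    t0 = datetime.fromisoformat("2024-11-20 15:45:31.563225")  # 'FTL startup' finished
    t1 = datetime.fromisoformat("2024-11-20 15:46:03.855000")  # last Copying tick
    print(1024 / (t1 - t0).total_seconds())  # ~31.7 MBps, logged as "average 32 MBps"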
[2024-11-20 15:46:03.819299] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Persist L2P": duration 5.191 ms, status 0
[2024-11-20 15:46:03.830530] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Finish L2P trims": duration 11.076 ms, status 0
[2024-11-20 15:46:03.893900] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Persist NV cache metadata": duration 63.063 ms, status 0
[2024-11-20 15:46:03.926872] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Persist valid map metadata": duration 32.763 ms, status 0
[2024-11-20 15:46:03.928914] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Persist P2L metadata": duration 1.852 ms, status 0
[2024-11-20 15:46:03.991238] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Persist band info metadata": duration 62.185 ms, status 0
[2024-11-20 15:46:04.050883] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Persist trim metadata": duration 59.441 ms, status 0
[2024-11-20 15:46:04.089031] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Persist superblock": duration 37.989 ms, status 0
[2024-11-20 15:46:04.129724] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action "Set FTL clean state": duration 40.486 ms, status 0
[2024-11-20 15:46:04.129846] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-20 15:46:04.129864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
[2024-11-20 15:46:04.129887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
[2024-11-20 15:46:04.129899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3-100: 0 / 261120 wr_cnt: 0 state: free
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:18.441 [2024-11-20 15:46:04.130916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:18.441 [2024-11-20 15:46:04.130927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:18.441 [2024-11-20 15:46:04.130937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:18.441 [2024-11-20 15:46:04.130947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:18.441 [2024-11-20 15:46:04.130958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:18.441 [2024-11-20 15:46:04.130976] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:18.441 [2024-11-20 15:46:04.130990] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae539e5c-0d15-4f4f-a98d-b97d05826ce0 00:34:18.441 [2024-11-20 15:46:04.131002] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:34:18.441 [2024-11-20 15:46:04.131012] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:18.441 [2024-11-20 15:46:04.131022] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:18.441 [2024-11-20 15:46:04.131032] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:18.441 [2024-11-20 15:46:04.131042] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:18.441 [2024-11-20 15:46:04.131052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:18.441 [2024-11-20 15:46:04.131090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:18.441 [2024-11-20 15:46:04.131100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:18.441 [2024-11-20 15:46:04.131110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:18.441 [2024-11-20 15:46:04.131121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.441 [2024-11-20 15:46:04.131133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:18.441 [2024-11-20 15:46:04.131144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:34:18.441 [2024-11-20 15:46:04.131155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.441 [2024-11-20 15:46:04.152485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.441 [2024-11-20 15:46:04.152547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:18.441 [2024-11-20 15:46:04.152563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.253 ms 00:34:18.441 [2024-11-20 15:46:04.152590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.441 [2024-11-20 15:46:04.153258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.441 [2024-11-20 15:46:04.153279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:18.441 [2024-11-20 15:46:04.153300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:34:18.441 [2024-11-20 15:46:04.153312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.441 [2024-11-20 15:46:04.209592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.441 
[2024-11-20 15:46:04.209857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:18.441 [2024-11-20 15:46:04.209886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.441 [2024-11-20 15:46:04.209898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.441 [2024-11-20 15:46:04.209981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.441 [2024-11-20 15:46:04.209993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:18.441 [2024-11-20 15:46:04.210013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.441 [2024-11-20 15:46:04.210024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.441 [2024-11-20 15:46:04.210116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.441 [2024-11-20 15:46:04.210130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:18.441 [2024-11-20 15:46:04.210142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.441 [2024-11-20 15:46:04.210153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.441 [2024-11-20 15:46:04.210172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.441 [2024-11-20 15:46:04.210184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:18.441 [2024-11-20 15:46:04.210195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.441 [2024-11-20 15:46:04.210211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.441 [2024-11-20 15:46:04.338285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.442 [2024-11-20 15:46:04.338352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:18.442 [2024-11-20 15:46:04.338368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.442 [2024-11-20 15:46:04.338380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.699 [2024-11-20 15:46:04.443462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.699 [2024-11-20 15:46:04.443527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:18.699 [2024-11-20 15:46:04.443549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.699 [2024-11-20 15:46:04.443560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.699 [2024-11-20 15:46:04.443670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.699 [2024-11-20 15:46:04.443684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:18.700 [2024-11-20 15:46:04.443695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.700 [2024-11-20 15:46:04.443705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.700 [2024-11-20 15:46:04.443760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.700 [2024-11-20 15:46:04.443772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:18.700 [2024-11-20 15:46:04.443784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.700 [2024-11-20 15:46:04.443794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.700 [2024-11-20 15:46:04.443919] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.700 [2024-11-20 15:46:04.443933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:18.700 [2024-11-20 15:46:04.443943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.700 [2024-11-20 15:46:04.443953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.700 [2024-11-20 15:46:04.443988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.700 [2024-11-20 15:46:04.444001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:18.700 [2024-11-20 15:46:04.444011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.700 [2024-11-20 15:46:04.444022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.700 [2024-11-20 15:46:04.444063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.700 [2024-11-20 15:46:04.444075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:18.700 [2024-11-20 15:46:04.444085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.700 [2024-11-20 15:46:04.444095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.700 [2024-11-20 15:46:04.444136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.700 [2024-11-20 15:46:04.444148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:18.700 [2024-11-20 15:46:04.444158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.700 [2024-11-20 15:46:04.444168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.700 [2024-11-20 15:46:04.444289] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 637.758 ms, result 0 00:34:19.633 00:34:19.633 00:34:19.633 15:46:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:34:22.165 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:34:22.165 15:46:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:34:22.165 15:46:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:34:22.165 15:46:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:22.165 15:46:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:22.165 15:46:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:34:22.165 15:46:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:22.165 15:46:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:34:22.165 15:46:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80948 00:34:22.165 15:46:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80948 ']' 00:34:22.165 15:46:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80948 00:34:22.165 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80948) - No such process 00:34:22.166 Process with pid 80948 is not found 00:34:22.166 15:46:07 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 80948 is not found' 00:34:22.166 15:46:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:34:22.424 Remove shared memory files 00:34:22.424 15:46:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:34:22.424 15:46:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:22.424 15:46:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:34:22.424 15:46:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:34:22.424 15:46:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:34:22.424 15:46:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:22.424 15:46:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:34:22.424 ************************************ 00:34:22.424 END TEST ftl_dirty_shutdown 00:34:22.424 ************************************ 00:34:22.424 00:34:22.424 real 3m18.340s 00:34:22.424 user 3m41.861s 00:34:22.424 sys 0m38.349s 00:34:22.424 15:46:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.424 15:46:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:22.424 15:46:08 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:34:22.424 15:46:08 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:22.424 15:46:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.424 15:46:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:22.424 ************************************ 00:34:22.424 START TEST ftl_upgrade_shutdown 00:34:22.424 ************************************ 00:34:22.424 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:34:22.424 * Looking for test storage... 
00:34:22.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:34:22.424 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.682 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:22.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.683 --rc genhtml_branch_coverage=1 00:34:22.683 --rc genhtml_function_coverage=1 00:34:22.683 --rc genhtml_legend=1 00:34:22.683 --rc geninfo_all_blocks=1 00:34:22.683 --rc geninfo_unexecuted_blocks=1 00:34:22.683 00:34:22.683 ' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:22.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.683 --rc genhtml_branch_coverage=1 00:34:22.683 --rc genhtml_function_coverage=1 00:34:22.683 --rc genhtml_legend=1 00:34:22.683 --rc geninfo_all_blocks=1 00:34:22.683 --rc geninfo_unexecuted_blocks=1 00:34:22.683 00:34:22.683 ' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:22.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.683 --rc genhtml_branch_coverage=1 00:34:22.683 --rc genhtml_function_coverage=1 00:34:22.683 --rc genhtml_legend=1 00:34:22.683 --rc geninfo_all_blocks=1 00:34:22.683 --rc geninfo_unexecuted_blocks=1 00:34:22.683 00:34:22.683 ' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:22.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.683 --rc genhtml_branch_coverage=1 00:34:22.683 --rc genhtml_function_coverage=1 00:34:22.683 --rc genhtml_legend=1 00:34:22.683 --rc geninfo_all_blocks=1 00:34:22.683 --rc geninfo_unexecuted_blocks=1 00:34:22.683 00:34:22.683 ' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:34:22.683 15:46:08 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83047 00:34:22.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83047 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83047 ']' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.683 15:46:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:22.942 [2024-11-20 15:46:08.686331] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
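
For readers skimming the xtrace, the setup upgrade_shutdown.sh has just performed reduces to the short sketch below. All names and values are copied from this run's trace; the backgrounding and PID capture are an assumption about how ftl/common.sh launches the target, and waitforlisten is the autotest_common.sh helper that polls the RPC socket until the target answers.

    #!/usr/bin/env bash
    # Minimal sketch of the upgrade_shutdown test environment (all sizes in MiB).
    source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh  # provides waitforlisten
    export FTL_BDEV=ftl                 # name of the FTL bdev under test
    export FTL_BASE=0000:00:11.0        # PCI address of the base (data) NVMe device
    export FTL_BASE_SIZE=20480          # base bdev size
    export FTL_CACHE=0000:00:10.0       # PCI address of the NV cache NVMe device
    export FTL_CACHE_SIZE=5120          # NV cache size
    export FTL_L2P_DRAM_LIMIT=2         # DRAM budget for the L2P table

    # tcp_target_setup: run spdk_tgt pinned to core 0, then wait for /var/tmp/spdk.sock.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!                     # 83047 in this run
    waitforlisten "$spdk_tgt_pid"
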
00:34:22.942 [2024-11-20 15:46:08.686787] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83047 ] 00:34:22.942 [2024-11-20 15:46:08.882113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:23.200 [2024-11-20 15:46:09.020607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:34:24.135 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:34:24.698 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:34:24.698 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:34:24.698 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:34:24.698 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:34:24.698 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:34:24.698 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:34:24.698 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:34:24.698 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:34:24.956 { 00:34:24.956 "name": "basen1", 00:34:24.956 "aliases": [ 00:34:24.956 "4d724e2f-5375-4d89-a111-8b729a491ac8" 00:34:24.956 ], 00:34:24.956 "product_name": "NVMe disk", 00:34:24.956 "block_size": 4096, 00:34:24.956 "num_blocks": 1310720, 00:34:24.956 "uuid": "4d724e2f-5375-4d89-a111-8b729a491ac8", 00:34:24.956 "numa_id": -1, 00:34:24.956 "assigned_rate_limits": { 00:34:24.956 "rw_ios_per_sec": 0, 00:34:24.956 "rw_mbytes_per_sec": 0, 00:34:24.956 "r_mbytes_per_sec": 0, 00:34:24.956 "w_mbytes_per_sec": 0 00:34:24.956 }, 00:34:24.956 "claimed": true, 00:34:24.956 "claim_type": "read_many_write_one", 00:34:24.956 "zoned": false, 00:34:24.956 "supported_io_types": { 00:34:24.956 "read": true, 00:34:24.956 "write": true, 00:34:24.956 "unmap": true, 00:34:24.956 "flush": true, 00:34:24.956 "reset": true, 00:34:24.956 "nvme_admin": true, 00:34:24.956 "nvme_io": true, 00:34:24.956 "nvme_io_md": false, 00:34:24.956 "write_zeroes": true, 00:34:24.956 "zcopy": false, 00:34:24.956 "get_zone_info": false, 00:34:24.956 "zone_management": false, 00:34:24.956 "zone_append": false, 00:34:24.956 "compare": true, 00:34:24.956 "compare_and_write": false, 00:34:24.956 "abort": true, 00:34:24.956 "seek_hole": false, 00:34:24.956 "seek_data": false, 00:34:24.956 "copy": true, 00:34:24.956 "nvme_iov_md": false 00:34:24.956 }, 00:34:24.956 "driver_specific": { 00:34:24.956 "nvme": [ 00:34:24.956 { 00:34:24.956 "pci_address": "0000:00:11.0", 00:34:24.956 "trid": { 00:34:24.956 "trtype": "PCIe", 00:34:24.956 "traddr": "0000:00:11.0" 00:34:24.956 }, 00:34:24.956 "ctrlr_data": { 00:34:24.956 "cntlid": 0, 00:34:24.956 "vendor_id": "0x1b36", 00:34:24.956 "model_number": "QEMU NVMe Ctrl", 00:34:24.956 "serial_number": "12341", 00:34:24.956 "firmware_revision": "8.0.0", 00:34:24.956 "subnqn": "nqn.2019-08.org.qemu:12341", 00:34:24.956 "oacs": { 00:34:24.956 "security": 0, 00:34:24.956 "format": 1, 00:34:24.956 "firmware": 0, 00:34:24.956 "ns_manage": 1 00:34:24.956 }, 00:34:24.956 "multi_ctrlr": false, 00:34:24.956 "ana_reporting": false 00:34:24.956 }, 00:34:24.956 "vs": { 00:34:24.956 "nvme_version": "1.4" 00:34:24.956 }, 00:34:24.956 "ns_data": { 00:34:24.956 "id": 1, 00:34:24.956 "can_share": false 00:34:24.956 } 00:34:24.956 } 00:34:24.956 ], 00:34:24.956 "mp_policy": "active_passive" 00:34:24.956 } 00:34:24.956 } 00:34:24.956 ]' 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:24.956 15:46:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:25.522 15:46:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=66311070-ed07-46f8-a299-dcf49968d370 00:34:25.522 15:46:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:34:25.522 15:46:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 66311070-ed07-46f8-a299-dcf49968d370 00:34:25.780 15:46:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:34:26.039 15:46:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=42be9348-94ea-42a5-9f2f-1e0b02b7f36f 00:34:26.039 15:46:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 42be9348-94ea-42a5-9f2f-1e0b02b7f36f 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=891f8e8d-cda4-497d-a767-6e1a89a0e0ea 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 891f8e8d-cda4-497d-a767-6e1a89a0e0ea ]] 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 891f8e8d-cda4-497d-a767-6e1a89a0e0ea 5120 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=891f8e8d-cda4-497d-a767-6e1a89a0e0ea 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 891f8e8d-cda4-497d-a767-6e1a89a0e0ea 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=891f8e8d-cda4-497d-a767-6e1a89a0e0ea 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:34:26.296 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 891f8e8d-cda4-497d-a767-6e1a89a0e0ea 00:34:26.563 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:34:26.563 { 00:34:26.563 "name": "891f8e8d-cda4-497d-a767-6e1a89a0e0ea", 00:34:26.563 "aliases": [ 00:34:26.563 "lvs/basen1p0" 00:34:26.563 ], 00:34:26.563 "product_name": "Logical Volume", 00:34:26.563 "block_size": 4096, 00:34:26.563 "num_blocks": 5242880, 00:34:26.563 "uuid": "891f8e8d-cda4-497d-a767-6e1a89a0e0ea", 00:34:26.563 "assigned_rate_limits": { 00:34:26.563 "rw_ios_per_sec": 0, 00:34:26.563 "rw_mbytes_per_sec": 0, 00:34:26.563 "r_mbytes_per_sec": 0, 00:34:26.563 "w_mbytes_per_sec": 0 00:34:26.563 }, 00:34:26.563 "claimed": false, 00:34:26.563 "zoned": false, 00:34:26.563 "supported_io_types": { 00:34:26.563 "read": true, 00:34:26.563 "write": true, 00:34:26.563 "unmap": true, 00:34:26.563 "flush": false, 00:34:26.563 "reset": true, 00:34:26.563 "nvme_admin": false, 00:34:26.563 "nvme_io": false, 00:34:26.563 "nvme_io_md": false, 00:34:26.563 "write_zeroes": 
true, 00:34:26.563 "zcopy": false, 00:34:26.563 "get_zone_info": false, 00:34:26.563 "zone_management": false, 00:34:26.563 "zone_append": false, 00:34:26.563 "compare": false, 00:34:26.563 "compare_and_write": false, 00:34:26.563 "abort": false, 00:34:26.563 "seek_hole": true, 00:34:26.563 "seek_data": true, 00:34:26.563 "copy": false, 00:34:26.563 "nvme_iov_md": false 00:34:26.563 }, 00:34:26.563 "driver_specific": { 00:34:26.563 "lvol": { 00:34:26.563 "lvol_store_uuid": "42be9348-94ea-42a5-9f2f-1e0b02b7f36f", 00:34:26.563 "base_bdev": "basen1", 00:34:26.563 "thin_provision": true, 00:34:26.563 "num_allocated_clusters": 0, 00:34:26.563 "snapshot": false, 00:34:26.563 "clone": false, 00:34:26.563 "esnap_clone": false 00:34:26.563 } 00:34:26.563 } 00:34:26.563 } 00:34:26.563 ]' 00:34:26.563 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:34:26.563 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:34:26.563 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:34:26.563 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:34:26.563 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:34:26.563 15:46:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:34:26.563 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:34:26.563 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:34:26.563 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:34:26.836 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:34:26.836 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:34:26.836 15:46:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:34:27.094 15:46:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:34:27.094 15:46:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:34:27.094 15:46:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 891f8e8d-cda4-497d-a767-6e1a89a0e0ea -c cachen1p0 --l2p_dram_limit 2 00:34:27.367 [2024-11-20 15:46:13.240011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.367 [2024-11-20 15:46:13.240082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:27.367 [2024-11-20 15:46:13.240104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:34:27.367 [2024-11-20 15:46:13.240116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:27.367 [2024-11-20 15:46:13.240199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.367 [2024-11-20 15:46:13.240213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:27.367 [2024-11-20 15:46:13.240230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:34:27.367 [2024-11-20 15:46:13.240241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:27.367 [2024-11-20 15:46:13.240269] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:27.367 [2024-11-20 
15:46:13.241496] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:27.367 [2024-11-20 15:46:13.241541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.367 [2024-11-20 15:46:13.241554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:27.367 [2024-11-20 15:46:13.241580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.273 ms 00:34:27.367 [2024-11-20 15:46:13.241593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:27.367 [2024-11-20 15:46:13.241731] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID e141d19c-be18-465e-80a9-1bc6a5ba2fca 00:34:27.367 [2024-11-20 15:46:13.243350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.367 [2024-11-20 15:46:13.243398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:34:27.367 [2024-11-20 15:46:13.243414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:34:27.367 [2024-11-20 15:46:13.243429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:27.367 [2024-11-20 15:46:13.251306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.367 [2024-11-20 15:46:13.251617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:27.367 [2024-11-20 15:46:13.251645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.805 ms 00:34:27.367 [2024-11-20 15:46:13.251661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:27.367 [2024-11-20 15:46:13.251735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.367 [2024-11-20 15:46:13.251753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:27.367 [2024-11-20 15:46:13.251767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:34:27.367 [2024-11-20 15:46:13.251785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:27.367 [2024-11-20 15:46:13.251868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.367 [2024-11-20 15:46:13.251885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:27.367 [2024-11-20 15:46:13.251898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:34:27.367 [2024-11-20 15:46:13.251920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:27.367 [2024-11-20 15:46:13.251951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:27.367 [2024-11-20 15:46:13.257850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.367 [2024-11-20 15:46:13.257904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:27.367 [2024-11-20 15:46:13.257924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.904 ms 00:34:27.367 [2024-11-20 15:46:13.257938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:27.367 [2024-11-20 15:46:13.257982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.367 [2024-11-20 15:46:13.257996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:27.367 [2024-11-20 15:46:13.258012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:34:27.367 [2024-11-20 15:46:13.258024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:34:27.367 [2024-11-20 15:46:13.258095] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:34:27.367 [2024-11-20 15:46:13.258247] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:27.367 [2024-11-20 15:46:13.258270] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:27.367 [2024-11-20 15:46:13.258287] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:34:27.368 [2024-11-20 15:46:13.258305] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:27.368 [2024-11-20 15:46:13.258320] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:34:27.368 [2024-11-20 15:46:13.258336] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:27.368 [2024-11-20 15:46:13.258347] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:27.368 [2024-11-20 15:46:13.258365] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:27.368 [2024-11-20 15:46:13.258377] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:27.368 [2024-11-20 15:46:13.258392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.368 [2024-11-20 15:46:13.258404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:27.368 [2024-11-20 15:46:13.258419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.299 ms 00:34:27.368 [2024-11-20 15:46:13.258431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:27.368 [2024-11-20 15:46:13.258519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.368 [2024-11-20 15:46:13.258533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:27.368 [2024-11-20 15:46:13.258550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:34:27.368 [2024-11-20 15:46:13.258588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:27.368 [2024-11-20 15:46:13.258720] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:27.368 [2024-11-20 15:46:13.258737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:27.368 [2024-11-20 15:46:13.258753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:27.368 [2024-11-20 15:46:13.258767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:27.368 [2024-11-20 15:46:13.258793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:27.368 [2024-11-20 15:46:13.258803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:27.368 [2024-11-20 15:46:13.258816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:27.368 [2024-11-20 15:46:13.258827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:27.368 [2024-11-20 15:46:13.258840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:27.368 [2024-11-20 15:46:13.258850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:27.368 [2024-11-20 15:46:13.258863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:27.368 [2024-11-20 15:46:13.258874] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:34:27.368 [2024-11-20 15:46:13.258887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:27.368 [2024-11-20 15:46:13.258897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:27.368 [2024-11-20 15:46:13.258910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:34:27.368 [2024-11-20 15:46:13.258921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:27.368 [2024-11-20 15:46:13.258936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:27.368 [2024-11-20 15:46:13.258948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:27.368 [2024-11-20 15:46:13.258980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:27.368 [2024-11-20 15:46:13.258991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:27.368 [2024-11-20 15:46:13.259005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:27.368 [2024-11-20 15:46:13.259016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:27.368 [2024-11-20 15:46:13.259030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:27.368 [2024-11-20 15:46:13.259041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:27.368 [2024-11-20 15:46:13.259055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:27.368 [2024-11-20 15:46:13.259066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:27.368 [2024-11-20 15:46:13.259079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:27.368 [2024-11-20 15:46:13.259090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:27.368 [2024-11-20 15:46:13.259104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:27.368 [2024-11-20 15:46:13.259115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:27.368 [2024-11-20 15:46:13.259128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:27.368 [2024-11-20 15:46:13.259139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:27.368 [2024-11-20 15:46:13.259155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:27.368 [2024-11-20 15:46:13.259166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:27.368 [2024-11-20 15:46:13.259179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:27.368 [2024-11-20 15:46:13.259190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:27.368 [2024-11-20 15:46:13.259203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:27.368 [2024-11-20 15:46:13.259214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:27.368 [2024-11-20 15:46:13.259228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:27.368 [2024-11-20 15:46:13.259238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:27.368 [2024-11-20 15:46:13.259252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:27.368 [2024-11-20 15:46:13.259263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:27.368 [2024-11-20 15:46:13.259277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:27.368 [2024-11-20 15:46:13.259287] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:34:27.368 [2024-11-20 15:46:13.259302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:27.368 [2024-11-20 15:46:13.259313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:27.368 [2024-11-20 15:46:13.259329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:27.368 [2024-11-20 15:46:13.259342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:27.368 [2024-11-20 15:46:13.259359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:27.368 [2024-11-20 15:46:13.259370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:27.368 [2024-11-20 15:46:13.259385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:27.368 [2024-11-20 15:46:13.259395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:27.368 [2024-11-20 15:46:13.259409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:27.368 [2024-11-20 15:46:13.259425] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:27.368 [2024-11-20 15:46:13.259442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:27.368 [2024-11-20 15:46:13.259459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:27.368 [2024-11-20 15:46:13.259474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:27.368 [2024-11-20 15:46:13.259486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:27.368 [2024-11-20 15:46:13.259501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:27.368 [2024-11-20 15:46:13.259513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:27.368 [2024-11-20 15:46:13.259529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:27.368 [2024-11-20 15:46:13.259541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:27.368 [2024-11-20 15:46:13.259556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:27.368 [2024-11-20 15:46:13.259568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:27.368 [2024-11-20 15:46:13.259585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:27.368 [2024-11-20 15:46:13.259952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:27.368 [2024-11-20 15:46:13.260038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:27.368 [2024-11-20 15:46:13.260097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:34:27.369 [2024-11-20 15:46:13.260211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:27.369 [2024-11-20 15:46:13.260227] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:34:27.369 [2024-11-20 15:46:13.260245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:27.369 [2024-11-20 15:46:13.260259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:27.369 [2024-11-20 15:46:13.260275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:27.369 [2024-11-20 15:46:13.260287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:27.369 [2024-11-20 15:46:13.260302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:27.369 [2024-11-20 15:46:13.260318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:27.369 [2024-11-20 15:46:13.260333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:27.369 [2024-11-20 15:46:13.260347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.680 ms 00:34:27.369 [2024-11-20 15:46:13.260362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:27.369 [2024-11-20 15:46:13.260426] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
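
Condensing the RPC calls traced above, the bdev stack underneath this FTL device can be reproduced with the sketch below. The commands and flags are exactly the ones in this log; the $rpc shorthand is editorial, and the UUIDs are the ones this particular run generated, so treat them as placeholders.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: NVMe at 0000:00:11.0 (1310720 blocks x 4096 B = 5120 MiB), exposed as basen1.
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    # Clear the stale lvstore left by the previous test, then create a fresh one on basen1.
    $rpc bdev_lvol_delete_lvstore -u 66311070-ed07-46f8-a299-dcf49968d370
    $rpc bdev_lvol_create_lvstore basen1 lvs        # -> 42be9348-94ea-42a5-9f2f-1e0b02b7f36f
    # Thin-provisioned 20480 MiB volume (5242880 blocks x 4096 B) to serve as the FTL base.
    $rpc bdev_lvol_create basen1p0 20480 -t -u 42be9348-94ea-42a5-9f2f-1e0b02b7f36f
    # NV cache: NVMe at 0000:00:10.0 as cachen1, split so its first 5120 MiB becomes cachen1p0.
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create cachen1 -s 5120 1
    # Glue the base lvol and the cache partition into the FTL bdev; the cache scrub logged
    # around this point is triggered by this create call.
    $rpc -t 60 bdev_ftl_create -b ftl -d 891f8e8d-cda4-497d-a767-6e1a89a0e0ea -c cachen1p0 --l2p_dram_limit 2
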
00:34:27.369 [2024-11-20 15:46:13.260448] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:34:31.553 [2024-11-20 15:46:17.078513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.078615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:34:31.553 [2024-11-20 15:46:17.078636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3818.064 ms 00:34:31.553 [2024-11-20 15:46:17.078676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.123596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.123949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:31.553 [2024-11-20 15:46:17.123980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.462 ms 00:34:31.553 [2024-11-20 15:46:17.123996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.124128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.124146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:34:31.553 [2024-11-20 15:46:17.124159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:34:31.553 [2024-11-20 15:46:17.124182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.178253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.178326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:31.553 [2024-11-20 15:46:17.178343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.018 ms 00:34:31.553 [2024-11-20 15:46:17.178376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.178437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.178459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:31.553 [2024-11-20 15:46:17.178472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:31.553 [2024-11-20 15:46:17.178487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.179071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.179095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:31.553 [2024-11-20 15:46:17.179109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.460 ms 00:34:31.553 [2024-11-20 15:46:17.179123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.179185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.179202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:31.553 [2024-11-20 15:46:17.179217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:34:31.553 [2024-11-20 15:46:17.179234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.202119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.202450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:31.553 [2024-11-20 15:46:17.202480] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.859 ms 00:34:31.553 [2024-11-20 15:46:17.202497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.231345] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:34:31.553 [2024-11-20 15:46:17.232628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.232660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:34:31.553 [2024-11-20 15:46:17.232682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.942 ms 00:34:31.553 [2024-11-20 15:46:17.232695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.273682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.273763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:34:31.553 [2024-11-20 15:46:17.273786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.910 ms 00:34:31.553 [2024-11-20 15:46:17.273798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.273957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.273976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:34:31.553 [2024-11-20 15:46:17.273995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:34:31.553 [2024-11-20 15:46:17.274007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.319346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.319429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:34:31.553 [2024-11-20 15:46:17.319454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.238 ms 00:34:31.553 [2024-11-20 15:46:17.319468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.365675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.365754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:34:31.553 [2024-11-20 15:46:17.365776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.099 ms 00:34:31.553 [2024-11-20 15:46:17.365788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.367999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.368048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:34:31.553 [2024-11-20 15:46:17.368067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.767 ms 00:34:31.553 [2024-11-20 15:46:17.368085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.553 [2024-11-20 15:46:17.507566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.553 [2024-11-20 15:46:17.507654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:34:31.553 [2024-11-20 15:46:17.507683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 139.365 ms 00:34:31.553 [2024-11-20 15:46:17.507697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.812 [2024-11-20 15:46:17.555516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:34:31.812 [2024-11-20 15:46:17.555611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:34:31.812 [2024-11-20 15:46:17.555647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.631 ms 00:34:31.812 [2024-11-20 15:46:17.555660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.812 [2024-11-20 15:46:17.603141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.812 [2024-11-20 15:46:17.603215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:34:31.812 [2024-11-20 15:46:17.603238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.355 ms 00:34:31.812 [2024-11-20 15:46:17.603251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.812 [2024-11-20 15:46:17.650829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.812 [2024-11-20 15:46:17.650908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:34:31.812 [2024-11-20 15:46:17.650931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.467 ms 00:34:31.812 [2024-11-20 15:46:17.650943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.812 [2024-11-20 15:46:17.651042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.812 [2024-11-20 15:46:17.651057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:34:31.812 [2024-11-20 15:46:17.651078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:34:31.812 [2024-11-20 15:46:17.651090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.812 [2024-11-20 15:46:17.651241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:31.812 [2024-11-20 15:46:17.651256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:34:31.812 [2024-11-20 15:46:17.651276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:34:31.812 [2024-11-20 15:46:17.651288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:31.812 [2024-11-20 15:46:17.652871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4412.215 ms, result 0 00:34:31.812 { 00:34:31.812 "name": "ftl", 00:34:31.812 "uuid": "e141d19c-be18-465e-80a9-1bc6a5ba2fca" 00:34:31.812 } 00:34:31.812 15:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:34:32.068 [2024-11-20 15:46:18.023832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.325 15:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:34:32.583 15:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:34:32.583 [2024-11-20 15:46:18.520356] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:34:32.841 15:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:34:32.841 [2024-11-20 15:46:18.763672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:32.841 15:46:18 
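With FTL startup finished, the trace above (ftl/common.sh lines 121-124) exports the new FTL bdev over NVMe/TCP in four RPCs. Condensed from the log, with rpc.py standing for scripts/rpc.py:

  # Export bdev "ftl" as a namespace of cnode0, listening on 127.0.0.1:4420.
  rpc.py nvmf_create_transport --trtype TCP
  rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1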
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:34:33.407 Fill FTL, iteration 1 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83186 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83186 /var/tmp/spdk.tgt.sock 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83186 ']' 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:34:33.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:33.407 15:46:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:33.407 [2024-11-20 15:46:19.277036] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
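The upgrade_shutdown.sh xtrace above fixes the workload shape: two iterations, 1 GiB per pass (1024 x 1 MiB) at queue depth 2, with seek and skip advancing between passes. Reconstructed from the trace of lines 28-48, the fill-and-checksum loop is roughly the following sketch (not the verbatim script; the readback file path is the test/ftl/file seen later in the log):

  # Sketch of the upgrade_shutdown.sh fill/checksum loop, per the xtrace.
  size=1073741824; seek=0; skip=0
  bs=1048576; count=1024; iterations=2; qd=2
  sums=()
  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  for (( i = 0; i < iterations; i++ )); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum $file | cut -f1 -d' ')
  done

The two checksums recorded this way (0ae168b6... and 30271ada... further down) are presumably what the post-restart verification compares fresh readbacks against.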
00:34:33.407 [2024-11-20 15:46:19.277185] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83186 ] 00:34:33.665 [2024-11-20 15:46:19.460640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.665 [2024-11-20 15:46:19.602251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.038 15:46:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.038 15:46:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:35.038 15:46:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:34:35.038 ftln1 00:34:35.038 15:46:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:34:35.038 15:46:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83186 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83186 ']' 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83186 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83186 00:34:35.604 killing process with pid 83186 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83186' 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83186 00:34:35.604 15:46:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83186 00:34:38.132 15:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:34:38.132 15:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:34:38.132 [2024-11-20 15:46:24.029107] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
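tcp_dd, as traced in common.sh lines 151-199 above, is indirection rather than magic: a short-lived initiator spdk_tgt attaches to the exported namespace to materialize bdev ftln1, its bdev subsystem config is captured into the ini.json that spdk_dd consumes (the redirection itself is not visible in the xtrace), the helper target is killed, and spdk_dd runs against the captured config. Later tcp_dd calls hit the [[ -f ini.json ]] check at common.sh line 153 and simply reuse the file. A condensed sketch, with paths taken from the log:

  # Sketch of tcp_initiator_setup + tcp_dd from the common.sh trace.
  rpc='scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
  build/bin/spdk_tgt --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  # Attaching over TCP creates bdev "ftln1" (controller "ftl" + namespace 1).
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  {
      echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
  } > test/ftl/config/ini.json
  kill $spdk_ini_pid
  # spdk_dd loads the captured bdev config and drives ftln1 directly.
  build/bin/spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 \
      --bs=1048576 --count=1024 --qd=2 --seek=0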
00:34:38.132 [2024-11-20 15:46:24.029271] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83251 ] 00:34:38.391 [2024-11-20 15:46:24.210538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.391 [2024-11-20 15:46:24.346266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:40.295  [2024-11-20T15:46:27.187Z] Copying: 199/1024 [MB] (199 MBps) [2024-11-20T15:46:28.121Z] Copying: 400/1024 [MB] (201 MBps) [2024-11-20T15:46:29.057Z] Copying: 609/1024 [MB] (209 MBps) [2024-11-20T15:46:29.997Z] Copying: 826/1024 [MB] (217 MBps) [2024-11-20T15:46:31.373Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:34:45.415 00:34:45.415 Calculate MD5 checksum, iteration 1 00:34:45.415 15:46:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:34:45.415 15:46:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:34:45.415 15:46:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:45.415 15:46:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:45.415 15:46:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:45.415 15:46:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:45.415 15:46:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:45.415 15:46:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:45.415 [2024-11-20 15:46:31.224100] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:34:45.415 [2024-11-20 15:46:31.224238] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83325 ] 00:34:45.672 [2024-11-20 15:46:31.405163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.672 [2024-11-20 15:46:31.543229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.575  [2024-11-20T15:46:33.791Z] Copying: 627/1024 [MB] (627 MBps) [2024-11-20T15:46:34.726Z] Copying: 1024/1024 [MB] (average 618 MBps) 00:34:48.768 00:34:48.768 15:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:34:48.768 15:46:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:51.306 15:46:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:34:51.306 Fill FTL, iteration 2 00:34:51.306 15:46:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0ae168b6b6f59132c60e2f91c34e07b5 00:34:51.306 15:46:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:34:51.306 15:46:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:34:51.306 15:46:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:34:51.306 15:46:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:34:51.306 15:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:51.306 15:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:51.306 15:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:51.306 15:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:51.306 15:46:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:34:51.306 [2024-11-20 15:46:36.817742] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:34:51.306 [2024-11-20 15:46:36.818272] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83383 ] 00:34:51.306 [2024-11-20 15:46:37.025392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.306 [2024-11-20 15:46:37.194668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.208  [2024-11-20T15:46:39.732Z] Copying: 216/1024 [MB] (216 MBps) [2024-11-20T15:46:41.110Z] Copying: 442/1024 [MB] (226 MBps) [2024-11-20T15:46:42.042Z] Copying: 663/1024 [MB] (221 MBps) [2024-11-20T15:46:42.609Z] Copying: 883/1024 [MB] (220 MBps) [2024-11-20T15:46:44.030Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:34:58.072 00:34:58.072 15:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:34:58.072 15:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:34:58.072 Calculate MD5 checksum, iteration 2 00:34:58.072 15:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:58.072 15:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:58.072 15:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:58.072 15:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:58.072 15:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:58.072 15:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:58.072 [2024-11-20 15:46:43.852542] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:34:58.072 [2024-11-20 15:46:43.852953] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83455 ] 00:34:58.329 [2024-11-20 15:46:44.042420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.329 [2024-11-20 15:46:44.215330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.230  [2024-11-20T15:46:46.755Z] Copying: 590/1024 [MB] (590 MBps) [2024-11-20T15:46:48.653Z] Copying: 1024/1024 [MB] (average 598 MBps) 00:35:02.695 00:35:02.695 15:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:35:02.695 15:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:04.593 15:46:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:04.594 15:46:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=30271ada2f8017e5450b5d05381e6fed 00:35:04.594 15:46:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:04.594 15:46:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:04.594 15:46:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:04.852 [2024-11-20 15:46:50.550522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:04.852 [2024-11-20 15:46:50.550598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:04.852 [2024-11-20 15:46:50.550634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:35:04.852 [2024-11-20 15:46:50.550655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:04.852 [2024-11-20 15:46:50.550690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:04.852 [2024-11-20 15:46:50.550703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:04.852 [2024-11-20 15:46:50.550720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:04.852 [2024-11-20 15:46:50.550731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:04.852 [2024-11-20 15:46:50.550754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:04.852 [2024-11-20 15:46:50.550767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:04.852 [2024-11-20 15:46:50.550778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:04.852 [2024-11-20 15:46:50.550789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:04.852 [2024-11-20 15:46:50.550867] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.329 ms, result 0 00:35:04.852 true 00:35:04.852 15:46:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:05.110 { 00:35:05.110 "name": "ftl", 00:35:05.110 "properties": [ 00:35:05.110 { 00:35:05.110 "name": "superblock_version", 00:35:05.110 "value": 5, 00:35:05.110 "read-only": true 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "name": "base_device", 00:35:05.110 "bands": [ 00:35:05.110 { 00:35:05.110 "id": 0, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 
00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 1, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 2, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 3, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 4, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 5, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 6, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 7, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 8, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 9, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 10, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 11, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 12, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 13, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 14, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 15, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 16, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 17, 00:35:05.110 "state": "FREE", 00:35:05.110 "validity": 0.0 00:35:05.110 } 00:35:05.110 ], 00:35:05.110 "read-only": true 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "name": "cache_device", 00:35:05.110 "type": "bdev", 00:35:05.110 "chunks": [ 00:35:05.110 { 00:35:05.110 "id": 0, 00:35:05.110 "state": "INACTIVE", 00:35:05.110 "utilization": 0.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 1, 00:35:05.110 "state": "CLOSED", 00:35:05.110 "utilization": 1.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 2, 00:35:05.110 "state": "CLOSED", 00:35:05.110 "utilization": 1.0 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 3, 00:35:05.110 "state": "OPEN", 00:35:05.110 "utilization": 0.001953125 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "id": 4, 00:35:05.110 "state": "OPEN", 00:35:05.110 "utilization": 0.0 00:35:05.110 } 00:35:05.110 ], 00:35:05.110 "read-only": true 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "name": "verbose_mode", 00:35:05.110 "value": true, 00:35:05.110 "unit": "", 00:35:05.110 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:05.110 }, 00:35:05.110 { 00:35:05.110 "name": "prep_upgrade_on_shutdown", 00:35:05.110 "value": false, 00:35:05.110 "unit": "", 00:35:05.110 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:05.110 } 00:35:05.110 ] 00:35:05.110 } 00:35:05.110 15:46:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:35:05.368 [2024-11-20 15:46:51.143201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
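Against the property dump above, the test gates on whether the cache actually holds data: upgrade_shutdown.sh line 63 (traced just below) pipes bdev_ftl_get_properties through a jq filter counting chunks with non-zero utilization. Standalone, the same query is:

  # Counts cache_device chunks holding data; against the dump above
  # (chunks 1 and 2 CLOSED at 1.0, chunk 3 OPEN at ~0.002) it yields 3.
  scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length'

The used=3 assignment and the [[ 3 -eq 0 ]] check in the following trace confirm the count and that the not-empty path is taken.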
00:35:05.368 [2024-11-20 15:46:51.143264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:05.368 [2024-11-20 15:46:51.143282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:35:05.368 [2024-11-20 15:46:51.143293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:05.368 [2024-11-20 15:46:51.143322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:05.368 [2024-11-20 15:46:51.143334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:05.368 [2024-11-20 15:46:51.143346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:05.368 [2024-11-20 15:46:51.143357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:05.368 [2024-11-20 15:46:51.143379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:05.368 [2024-11-20 15:46:51.143391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:05.368 [2024-11-20 15:46:51.143402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:05.368 [2024-11-20 15:46:51.143413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:05.368 [2024-11-20 15:46:51.143479] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.268 ms, result 0 00:35:05.368 true 00:35:05.368 15:46:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:35:05.368 15:46:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:35:05.368 15:46:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:05.626 15:46:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:35:05.626 15:46:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:35:05.626 15:46:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:05.626 [2024-11-20 15:46:51.579635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:05.626 [2024-11-20 15:46:51.579695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:05.626 [2024-11-20 15:46:51.579712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:05.626 [2024-11-20 15:46:51.579723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:05.626 [2024-11-20 15:46:51.579764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:05.626 [2024-11-20 15:46:51.579775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:05.626 [2024-11-20 15:46:51.579786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:05.626 [2024-11-20 15:46:51.579796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:05.626 [2024-11-20 15:46:51.579817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:05.626 [2024-11-20 15:46:51.579828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:05.626 [2024-11-20 15:46:51.579838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:05.626 [2024-11-20 15:46:51.579848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:35:05.626 [2024-11-20 15:46:51.579910] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.269 ms, result 0 00:35:05.885 true 00:35:05.885 15:46:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:05.885 { 00:35:05.885 "name": "ftl", 00:35:05.885 "properties": [ 00:35:05.885 { 00:35:05.885 "name": "superblock_version", 00:35:05.885 "value": 5, 00:35:05.885 "read-only": true 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "name": "base_device", 00:35:05.885 "bands": [ 00:35:05.885 { 00:35:05.885 "id": 0, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 1, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 2, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 3, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 4, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 5, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 6, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 7, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 8, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 9, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 10, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 11, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 12, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 13, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 14, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 15, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 16, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 17, 00:35:05.885 "state": "FREE", 00:35:05.885 "validity": 0.0 00:35:05.885 } 00:35:05.885 ], 00:35:05.885 "read-only": true 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "name": "cache_device", 00:35:05.885 "type": "bdev", 00:35:05.885 "chunks": [ 00:35:05.885 { 00:35:05.885 "id": 0, 00:35:05.885 "state": "INACTIVE", 00:35:05.885 "utilization": 0.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 1, 00:35:05.885 "state": "CLOSED", 00:35:05.885 "utilization": 1.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 2, 00:35:05.885 "state": "CLOSED", 00:35:05.885 "utilization": 1.0 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 3, 00:35:05.885 "state": "OPEN", 00:35:05.885 "utilization": 0.001953125 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "id": 4, 00:35:05.885 "state": "OPEN", 00:35:05.885 "utilization": 0.0 00:35:05.885 } 00:35:05.885 ], 00:35:05.885 "read-only": true 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "name": "verbose_mode", 
00:35:05.885 "value": true, 00:35:05.885 "unit": "", 00:35:05.885 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:05.885 }, 00:35:05.885 { 00:35:05.885 "name": "prep_upgrade_on_shutdown", 00:35:05.885 "value": true, 00:35:05.885 "unit": "", 00:35:05.885 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:05.885 } 00:35:05.885 ] 00:35:05.885 } 00:35:05.885 15:46:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:35:05.885 15:46:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83047 ]] 00:35:05.885 15:46:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83047 00:35:05.885 15:46:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83047 ']' 00:35:05.885 15:46:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83047 00:35:05.885 15:46:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:35:05.885 15:46:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:05.885 15:46:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83047 00:35:06.143 killing process with pid 83047 00:35:06.143 15:46:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:06.143 15:46:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:06.143 15:46:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83047' 00:35:06.143 15:46:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83047 00:35:06.143 15:46:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83047 00:35:07.528 [2024-11-20 15:46:53.056784] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:35:07.528 [2024-11-20 15:46:53.078217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:07.528 [2024-11-20 15:46:53.078282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:35:07.528 [2024-11-20 15:46:53.078299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:07.528 [2024-11-20 15:46:53.078311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:07.528 [2024-11-20 15:46:53.078337] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:35:07.528 [2024-11-20 15:46:53.082852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:07.528 [2024-11-20 15:46:53.082889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:35:07.528 [2024-11-20 15:46:53.082905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.495 ms 00:35:07.528 [2024-11-20 15:46:53.082917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.637 [2024-11-20 15:47:00.696695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.637 [2024-11-20 15:47:00.696960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:35:15.637 [2024-11-20 15:47:00.696990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7613.695 ms 00:35:15.637 [2024-11-20 15:47:00.697008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.637 [2024-11-20 15:47:00.698184] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:35:15.637 [2024-11-20 15:47:00.698214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:35:15.637 [2024-11-20 15:47:00.698228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.147 ms 00:35:15.637 [2024-11-20 15:47:00.698240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.637 [2024-11-20 15:47:00.699377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.637 [2024-11-20 15:47:00.699399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:35:15.637 [2024-11-20 15:47:00.699411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.087 ms 00:35:15.637 [2024-11-20 15:47:00.699421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.637 [2024-11-20 15:47:00.715033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.637 [2024-11-20 15:47:00.715073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:35:15.637 [2024-11-20 15:47:00.715087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.562 ms 00:35:15.637 [2024-11-20 15:47:00.715098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.637 [2024-11-20 15:47:00.725194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.637 [2024-11-20 15:47:00.725236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:35:15.637 [2024-11-20 15:47:00.725252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.058 ms 00:35:15.637 [2024-11-20 15:47:00.725263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.637 [2024-11-20 15:47:00.725369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.637 [2024-11-20 15:47:00.725383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:35:15.637 [2024-11-20 15:47:00.725406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:35:15.637 [2024-11-20 15:47:00.725423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.637 [2024-11-20 15:47:00.739986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.637 [2024-11-20 15:47:00.740024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:35:15.637 [2024-11-20 15:47:00.740037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.544 ms 00:35:15.637 [2024-11-20 15:47:00.740047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.637 [2024-11-20 15:47:00.754819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.637 [2024-11-20 15:47:00.755053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:35:15.637 [2024-11-20 15:47:00.755077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.731 ms 00:35:15.637 [2024-11-20 15:47:00.755088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.637 [2024-11-20 15:47:00.770852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.637 [2024-11-20 15:47:00.770911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:35:15.637 [2024-11-20 15:47:00.770928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.711 ms 00:35:15.637 [2024-11-20 15:47:00.770939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.637 [2024-11-20 15:47:00.788479] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.637 [2024-11-20 15:47:00.788540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:35:15.637 [2024-11-20 15:47:00.788556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.431 ms 00:35:15.637 [2024-11-20 15:47:00.788581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.637 [2024-11-20 15:47:00.788625] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:35:15.637 [2024-11-20 15:47:00.788644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:15.637 [2024-11-20 15:47:00.788657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:35:15.637 [2024-11-20 15:47:00.788689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:35:15.637 [2024-11-20 15:47:00.788700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:15.637 [2024-11-20 15:47:00.788889] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:35:15.637 [2024-11-20 15:47:00.788900] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e141d19c-be18-465e-80a9-1bc6a5ba2fca 00:35:15.637 [2024-11-20 15:47:00.788912] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:35:15.637 [2024-11-20 15:47:00.788922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:35:15.637 [2024-11-20 15:47:00.788933] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:35:15.637 [2024-11-20 15:47:00.788944] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:35:15.637 [2024-11-20 15:47:00.788955] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:35:15.637 [2024-11-20 15:47:00.788966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:35:15.637 [2024-11-20 15:47:00.788983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:35:15.637 [2024-11-20 15:47:00.788993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:35:15.637 [2024-11-20 15:47:00.789003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:35:15.637 [2024-11-20 15:47:00.789014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.638 [2024-11-20 15:47:00.789037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:35:15.638 [2024-11-20 15:47:00.789052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.390 ms 00:35:15.638 [2024-11-20 15:47:00.789063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:00.811094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.638 [2024-11-20 15:47:00.811151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:35:15.638 [2024-11-20 15:47:00.811167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.992 ms 00:35:15.638 [2024-11-20 15:47:00.811179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:00.811778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.638 [2024-11-20 15:47:00.811794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:35:15.638 [2024-11-20 15:47:00.811805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.554 ms 00:35:15.638 [2024-11-20 15:47:00.811816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:00.879203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:00.879266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:15.638 [2024-11-20 15:47:00.879281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:00.879297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:00.879353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:00.879364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:15.638 [2024-11-20 15:47:00.879375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:00.879385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:00.879501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:00.879515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:15.638 [2024-11-20 15:47:00.879526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:00.879537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:00.879560] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:00.879595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:15.638 [2024-11-20 15:47:00.879606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:00.879616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:01.006467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:01.006789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:15.638 [2024-11-20 15:47:01.006816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:01.006836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:01.117725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:01.118010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:15.638 [2024-11-20 15:47:01.118036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:01.118049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:01.118172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:01.118186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:15.638 [2024-11-20 15:47:01.118197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:01.118208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:01.118272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:01.118288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:15.638 [2024-11-20 15:47:01.118309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:01.118320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:01.118447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:01.118461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:15.638 [2024-11-20 15:47:01.118471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:01.118481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:01.118517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:01.118530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:35:15.638 [2024-11-20 15:47:01.118544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:01.118554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:01.118623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:01.118636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:15.638 [2024-11-20 15:47:01.118656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:01.118671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 
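The statistics block dumped a few lines up is internally consistent: 786752 total writes over 524288 user writes gives exactly the reported WAF of 1.5006, and 524288 user blocks at the inferred 4 KiB block size is 2048 MiB, i.e. the two 1 GiB fills. A quick check (block size assumed, as before):

  echo "scale=4; 786752 / 524288" | bc       # -> 1.5006, the reported WAF
  echo $(( 524288 * 4096 / 1024 / 1024 ))    # -> 2048 MiB written by the test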
[2024-11-20 15:47:01.118721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:15.638 [2024-11-20 15:47:01.118738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:15.638 [2024-11-20 15:47:01.118748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:15.638 [2024-11-20 15:47:01.118762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.638 [2024-11-20 15:47:01.118893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8040.603 ms, result 0 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:19.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83671 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83671 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83671 ']' 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.883 15:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:19.883 [2024-11-20 15:47:05.344815] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
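The restart traced above (common.sh lines 81-91) deliberately takes the config-file path: spdk_tgt is relaunched with --config=tgt.json, presumably written by the save_config call earlier in the log, so the FTL device is reconstructed from saved state rather than fresh RPCs, and the dirty-state startup plays out in the trace that follows. Condensed:

  # Sketch of tcp_target_setup on restart, per the common.sh trace.
  build/bin/spdk_tgt --cpumask='[0]' --config=test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid   # waits for /var/tmp/spdk.sock to accept RPCs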
00:35:19.883 [2024-11-20 15:47:05.345190] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83671 ] 00:35:19.883 [2024-11-20 15:47:05.526820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.883 [2024-11-20 15:47:05.663421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.818 [2024-11-20 15:47:06.709981] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:20.818 [2024-11-20 15:47:06.710236] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:21.076 [2024-11-20 15:47:06.858238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.076 [2024-11-20 15:47:06.858504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:35:21.077 [2024-11-20 15:47:06.858729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:21.077 [2024-11-20 15:47:06.858779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.858985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.077 [2024-11-20 15:47:06.859035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:21.077 [2024-11-20 15:47:06.859136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:35:21.077 [2024-11-20 15:47:06.859178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.859292] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:35:21.077 [2024-11-20 15:47:06.860512] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:35:21.077 [2024-11-20 15:47:06.860703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.077 [2024-11-20 15:47:06.860792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:21.077 [2024-11-20 15:47:06.860837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.422 ms 00:35:21.077 [2024-11-20 15:47:06.860917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.862610] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:35:21.077 [2024-11-20 15:47:06.886191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.077 [2024-11-20 15:47:06.886274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:35:21.077 [2024-11-20 15:47:06.886302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.577 ms 00:35:21.077 [2024-11-20 15:47:06.886316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.886447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.077 [2024-11-20 15:47:06.886464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:35:21.077 [2024-11-20 15:47:06.886476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:35:21.077 [2024-11-20 15:47:06.886488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.894470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.077 [2024-11-20 
15:47:06.894790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:21.077 [2024-11-20 15:47:06.894819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.838 ms 00:35:21.077 [2024-11-20 15:47:06.894832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.894941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.077 [2024-11-20 15:47:06.894957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:21.077 [2024-11-20 15:47:06.894971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:35:21.077 [2024-11-20 15:47:06.894983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.895055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.077 [2024-11-20 15:47:06.895069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:35:21.077 [2024-11-20 15:47:06.895087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:35:21.077 [2024-11-20 15:47:06.895098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.895132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:35:21.077 [2024-11-20 15:47:06.900737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.077 [2024-11-20 15:47:06.900788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:21.077 [2024-11-20 15:47:06.900802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.612 ms 00:35:21.077 [2024-11-20 15:47:06.900820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.900861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.077 [2024-11-20 15:47:06.900873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:35:21.077 [2024-11-20 15:47:06.900885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:21.077 [2024-11-20 15:47:06.900896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.900982] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:35:21.077 [2024-11-20 15:47:06.901011] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:35:21.077 [2024-11-20 15:47:06.901055] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:35:21.077 [2024-11-20 15:47:06.901074] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:35:21.077 [2024-11-20 15:47:06.901202] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:35:21.077 [2024-11-20 15:47:06.901226] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:35:21.077 [2024-11-20 15:47:06.901242] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:35:21.077 [2024-11-20 15:47:06.901257] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:35:21.077 [2024-11-20 15:47:06.901270] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:35:21.077 [2024-11-20 15:47:06.901288] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:35:21.077 [2024-11-20 15:47:06.901299] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:35:21.077 [2024-11-20 15:47:06.901310] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:35:21.077 [2024-11-20 15:47:06.901321] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:35:21.077 [2024-11-20 15:47:06.901334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.077 [2024-11-20 15:47:06.901345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:35:21.077 [2024-11-20 15:47:06.901357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:35:21.077 [2024-11-20 15:47:06.901367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.901461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.077 [2024-11-20 15:47:06.901473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:35:21.077 [2024-11-20 15:47:06.901484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:35:21.077 [2024-11-20 15:47:06.901500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.077 [2024-11-20 15:47:06.901631] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:35:21.077 [2024-11-20 15:47:06.901653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:35:21.077 [2024-11-20 15:47:06.901666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:21.077 [2024-11-20 15:47:06.901677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:21.077 [2024-11-20 15:47:06.901689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:35:21.077 [2024-11-20 15:47:06.901699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:35:21.077 [2024-11-20 15:47:06.901709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:35:21.077 [2024-11-20 15:47:06.901720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:35:21.077 [2024-11-20 15:47:06.901730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:35:21.077 [2024-11-20 15:47:06.901740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:21.077 [2024-11-20 15:47:06.901751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:35:21.077 [2024-11-20 15:47:06.901761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:35:21.077 [2024-11-20 15:47:06.901771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:21.077 [2024-11-20 15:47:06.901781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:35:21.077 [2024-11-20 15:47:06.901796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:35:21.077 [2024-11-20 15:47:06.901807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:21.077 [2024-11-20 15:47:06.901817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:35:21.077 [2024-11-20 15:47:06.901827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:35:21.077 [2024-11-20 15:47:06.901838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:21.077 [2024-11-20 15:47:06.901848] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:35:21.077 [2024-11-20 15:47:06.901858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:35:21.077 [2024-11-20 15:47:06.901869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:21.077 [2024-11-20 15:47:06.901879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:35:21.077 [2024-11-20 15:47:06.901889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:35:21.077 [2024-11-20 15:47:06.901899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:21.077 [2024-11-20 15:47:06.901921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:35:21.077 [2024-11-20 15:47:06.901931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:35:21.077 [2024-11-20 15:47:06.901941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:21.077 [2024-11-20 15:47:06.901951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:35:21.077 [2024-11-20 15:47:06.901962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:35:21.077 [2024-11-20 15:47:06.901972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:21.077 [2024-11-20 15:47:06.901982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:35:21.077 [2024-11-20 15:47:06.901992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:35:21.077 [2024-11-20 15:47:06.902003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:21.077 [2024-11-20 15:47:06.902013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:35:21.078 [2024-11-20 15:47:06.902023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:35:21.078 [2024-11-20 15:47:06.902033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:21.078 [2024-11-20 15:47:06.902043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:35:21.078 [2024-11-20 15:47:06.902053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:35:21.078 [2024-11-20 15:47:06.902063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:21.078 [2024-11-20 15:47:06.902073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:35:21.078 [2024-11-20 15:47:06.902083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:35:21.078 [2024-11-20 15:47:06.902093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:21.078 [2024-11-20 15:47:06.902103] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:35:21.078 [2024-11-20 15:47:06.902114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:35:21.078 [2024-11-20 15:47:06.902125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:21.078 [2024-11-20 15:47:06.902136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:21.078 [2024-11-20 15:47:06.902152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:35:21.078 [2024-11-20 15:47:06.902163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:35:21.078 [2024-11-20 15:47:06.902173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:35:21.078 [2024-11-20 15:47:06.902184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:35:21.078 [2024-11-20 15:47:06.902194] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:35:21.078 [2024-11-20 15:47:06.902204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:35:21.078 [2024-11-20 15:47:06.902216] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:35:21.078 [2024-11-20 15:47:06.902229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:21.078 [2024-11-20 15:47:06.902242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:35:21.078 [2024-11-20 15:47:06.902253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:35:21.078 [2024-11-20 15:47:06.902283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:35:21.078 [2024-11-20 15:47:06.902295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:35:21.078 [2024-11-20 15:47:06.902307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:35:21.078 [2024-11-20 15:47:06.902319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:35:21.078 [2024-11-20 15:47:06.902331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:35:21.078 [2024-11-20 15:47:06.902343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:35:21.078 [2024-11-20 15:47:06.902355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:35:21.078 [2024-11-20 15:47:06.902367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:35:21.078 [2024-11-20 15:47:06.902379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:35:21.078 [2024-11-20 15:47:06.902390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:35:21.078 [2024-11-20 15:47:06.902402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:35:21.078 [2024-11-20 15:47:06.902414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:35:21.078 [2024-11-20 15:47:06.902426] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:35:21.078 [2024-11-20 15:47:06.902438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:21.078 [2024-11-20 15:47:06.902451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:21.078 [2024-11-20 15:47:06.902464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:35:21.078 [2024-11-20 15:47:06.902477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:35:21.078 [2024-11-20 15:47:06.902488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:35:21.078 [2024-11-20 15:47:06.902501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:21.078 [2024-11-20 15:47:06.902512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:35:21.078 [2024-11-20 15:47:06.902524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.959 ms 00:35:21.078 [2024-11-20 15:47:06.902537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:21.078 [2024-11-20 15:47:06.902616] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:35:21.078 [2024-11-20 15:47:06.902632] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:35:23.604 [2024-11-20 15:47:09.169702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.604 [2024-11-20 15:47:09.169784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:35:23.604 [2024-11-20 15:47:09.169807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2267.071 ms 00:35:23.604 [2024-11-20 15:47:09.169820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.604 [2024-11-20 15:47:09.214207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.604 [2024-11-20 15:47:09.214274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:23.604 [2024-11-20 15:47:09.214292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.980 ms 00:35:23.604 [2024-11-20 15:47:09.214322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.604 [2024-11-20 15:47:09.214464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.214486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:23.605 [2024-11-20 15:47:09.214500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:35:23.605 [2024-11-20 15:47:09.214512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.271330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.271561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:23.605 [2024-11-20 15:47:09.271618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 56.761 ms 00:35:23.605 [2024-11-20 15:47:09.271640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.271717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.271731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:23.605 [2024-11-20 15:47:09.271744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:23.605 [2024-11-20 15:47:09.271757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.272311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.272329] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:23.605 [2024-11-20 15:47:09.272343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.451 ms 00:35:23.605 [2024-11-20 15:47:09.272355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.272413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.272426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:23.605 [2024-11-20 15:47:09.272440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:35:23.605 [2024-11-20 15:47:09.272452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.296907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.296970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:23.605 [2024-11-20 15:47:09.296988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.425 ms 00:35:23.605 [2024-11-20 15:47:09.297000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.328959] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:35:23.605 [2024-11-20 15:47:09.329292] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:35:23.605 [2024-11-20 15:47:09.329323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.329336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:35:23.605 [2024-11-20 15:47:09.329351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.140 ms 00:35:23.605 [2024-11-20 15:47:09.329363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.354904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.355229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:35:23.605 [2024-11-20 15:47:09.355259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.465 ms 00:35:23.605 [2024-11-20 15:47:09.355272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.377736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.378082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:35:23.605 [2024-11-20 15:47:09.378111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.326 ms 00:35:23.605 [2024-11-20 15:47:09.378125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.400807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.401063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:35:23.605 [2024-11-20 15:47:09.401089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.581 ms 00:35:23.605 [2024-11-20 15:47:09.401102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.402098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.402140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:35:23.605 [2024-11-20 
15:47:09.402155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.789 ms 00:35:23.605 [2024-11-20 15:47:09.402167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.507385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.507478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:35:23.605 [2024-11-20 15:47:09.507498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 105.178 ms 00:35:23.605 [2024-11-20 15:47:09.507511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.523063] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:35:23.605 [2024-11-20 15:47:09.524308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.524345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:35:23.605 [2024-11-20 15:47:09.524364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.690 ms 00:35:23.605 [2024-11-20 15:47:09.524377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.524524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.524544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:35:23.605 [2024-11-20 15:47:09.524558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:35:23.605 [2024-11-20 15:47:09.524591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.524672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.524687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:35:23.605 [2024-11-20 15:47:09.524700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:35:23.605 [2024-11-20 15:47:09.524713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.524743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.524756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:35:23.605 [2024-11-20 15:47:09.524774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:23.605 [2024-11-20 15:47:09.524785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.605 [2024-11-20 15:47:09.524823] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:35:23.605 [2024-11-20 15:47:09.524837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.605 [2024-11-20 15:47:09.524849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:35:23.605 [2024-11-20 15:47:09.524862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:35:23.605 [2024-11-20 15:47:09.524873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.863 [2024-11-20 15:47:09.568825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.863 [2024-11-20 15:47:09.568917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:35:23.863 [2024-11-20 15:47:09.568936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.919 ms 00:35:23.863 [2024-11-20 15:47:09.568949] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.863 [2024-11-20 15:47:09.569084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:23.863 [2024-11-20 15:47:09.569099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:35:23.863 [2024-11-20 15:47:09.569118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:35:23.863 [2024-11-20 15:47:09.569131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:23.863 [2024-11-20 15:47:09.570664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2711.822 ms, result 0 00:35:23.863 [2024-11-20 15:47:09.585329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.863 [2024-11-20 15:47:09.601373] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:35:23.863 [2024-11-20 15:47:09.611907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:24.119 15:47:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:24.119 15:47:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:35:24.119 15:47:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:24.119 15:47:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:35:24.119 15:47:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:24.376 [2024-11-20 15:47:10.200364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:24.376 [2024-11-20 15:47:10.200433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:24.376 [2024-11-20 15:47:10.200452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:24.376 [2024-11-20 15:47:10.200470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:24.376 [2024-11-20 15:47:10.200504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:24.376 [2024-11-20 15:47:10.200518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:24.376 [2024-11-20 15:47:10.200530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:24.376 [2024-11-20 15:47:10.200542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:24.376 [2024-11-20 15:47:10.200587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:24.376 [2024-11-20 15:47:10.200601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:24.376 [2024-11-20 15:47:10.200614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:24.376 [2024-11-20 15:47:10.200625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:24.376 [2024-11-20 15:47:10.200699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.327 ms, result 0 00:35:24.376 true 00:35:24.376 15:47:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:24.636 { 00:35:24.636 "name": "ftl", 00:35:24.636 "properties": [ 00:35:24.636 { 00:35:24.636 "name": "superblock_version", 00:35:24.636 "value": 5, 00:35:24.636 "read-only": true 00:35:24.636 }, 
00:35:24.636 { 00:35:24.636 "name": "base_device", 00:35:24.636 "bands": [ 00:35:24.636 { 00:35:24.636 "id": 0, 00:35:24.636 "state": "CLOSED", 00:35:24.636 "validity": 1.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 1, 00:35:24.636 "state": "CLOSED", 00:35:24.636 "validity": 1.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 2, 00:35:24.636 "state": "CLOSED", 00:35:24.636 "validity": 0.007843137254901933 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 3, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 4, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 5, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 6, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 7, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 8, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 9, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 10, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 11, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 12, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 13, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 14, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 15, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 16, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 17, 00:35:24.636 "state": "FREE", 00:35:24.636 "validity": 0.0 00:35:24.636 } 00:35:24.636 ], 00:35:24.636 "read-only": true 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "name": "cache_device", 00:35:24.636 "type": "bdev", 00:35:24.636 "chunks": [ 00:35:24.636 { 00:35:24.636 "id": 0, 00:35:24.636 "state": "INACTIVE", 00:35:24.636 "utilization": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 1, 00:35:24.636 "state": "OPEN", 00:35:24.636 "utilization": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 2, 00:35:24.636 "state": "OPEN", 00:35:24.636 "utilization": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 3, 00:35:24.636 "state": "FREE", 00:35:24.636 "utilization": 0.0 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "id": 4, 00:35:24.636 "state": "FREE", 00:35:24.636 "utilization": 0.0 00:35:24.636 } 00:35:24.636 ], 00:35:24.636 "read-only": true 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "name": "verbose_mode", 00:35:24.636 "value": true, 00:35:24.636 "unit": "", 00:35:24.636 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:24.636 }, 00:35:24.636 { 00:35:24.636 "name": "prep_upgrade_on_shutdown", 00:35:24.636 "value": false, 00:35:24.636 "unit": "", 00:35:24.636 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:24.636 } 00:35:24.636 ] 00:35:24.636 } 00:35:24.636 15:47:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:35:24.636 15:47:10 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:24.636 15:47:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:35:24.893 15:47:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:35:24.893 15:47:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:35:24.893 15:47:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:35:24.893 15:47:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:35:24.893 15:47:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:25.150 Validate MD5 checksum, iteration 1 00:35:25.150 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:25.151 15:47:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:25.408 [2024-11-20 15:47:11.194672] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
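The used/opened checks logged above combine bdev_ftl_get_properties with jq filters over the JSON dumped earlier; a hedged re-creation of those two assertions (RPC call and jq filters reproduced from the xtrace, surrounding shell wiring illustrative):

  # no NV-cache chunk may carry data before the shutdown/upgrade test starts
  used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -ne 0 ]] && exit 1
  # and no band may still be OPENED (filter copied verbatim from the log)
  opened=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
  [[ $opened -ne 0 ]] && exit 1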
00:35:25.408 [2024-11-20 15:47:11.194824] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83747 ] 00:35:25.664 [2024-11-20 15:47:11.375802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.664 [2024-11-20 15:47:11.515783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.562  [2024-11-20T15:47:14.453Z] Copying: 505/1024 [MB] (505 MBps) [2024-11-20T15:47:14.454Z] Copying: 1006/1024 [MB] (501 MBps) [2024-11-20T15:47:16.429Z] Copying: 1024/1024 [MB] (average 504 MBps) 00:35:30.471 00:35:30.471 15:47:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:35:30.471 15:47:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:35:32.367 Validate MD5 checksum, iteration 2 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0ae168b6b6f59132c60e2f91c34e07b5 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0ae168b6b6f59132c60e2f91c34e07b5 != \0\a\e\1\6\8\b\6\b\6\f\5\9\1\3\2\c\6\0\e\2\f\9\1\c\3\4\e\0\7\b\5 ]] 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:32.367 15:47:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:32.367 [2024-11-20 15:47:18.202248] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
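Each "Validate MD5 checksum" iteration above pulls 1024 MiB from the exported ftln1 bdev over NVMe/TCP through a short-lived spdk_dd initiator, then fingerprints the scratch file; a sketch of one pass, assuming the sums recorded here are re-checked against the same windows after the dirty restart (tcp_dd is the ftl/common.sh wrapper shown in the xtrace; $skip and $sum follow the script's own variables):

  # read the next 1 GiB window from the FTL bdev into a scratch file
  tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
         --bs=1048576 --count=1024 --qd=2 --skip=$skip
  skip=$((skip + 1024))
  # fingerprint the window; the [[ ... != ... ]] test in the log compares this
  # against the checksum captured for the same window before shutdown
  sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')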
00:35:32.367 [2024-11-20 15:47:18.202798] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83815 ] 00:35:32.625 [2024-11-20 15:47:18.397747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.625 [2024-11-20 15:47:18.576333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.526  [2024-11-20T15:47:21.417Z] Copying: 488/1024 [MB] (488 MBps) [2024-11-20T15:47:21.674Z] Copying: 999/1024 [MB] (511 MBps) [2024-11-20T15:47:23.047Z] Copying: 1024/1024 [MB] (average 499 MBps) 00:35:37.089 00:35:37.089 15:47:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:35:37.089 15:47:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=30271ada2f8017e5450b5d05381e6fed 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 30271ada2f8017e5450b5d05381e6fed != \3\0\2\7\1\a\d\a\2\f\8\0\1\7\e\5\4\5\0\b\5\d\0\5\3\8\1\e\6\f\e\d ]] 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83671 ]] 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83671 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83894 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83894 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83894 ']' 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
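The restart just logged is tcp_target_shutdown_dirty: SIGKILL denies FTL any chance to persist a clean shutdown state, and a new target (pid 83894) comes up on the same tgt.json, so the startup that follows must handle the deliberately dirty instance. A sketch of that step, reusing the launch pattern from the earlier sketch:

  # leave the FTL device dirty on purpose: no graceful shutdown path runs
  kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid
  # tcp_target_setup then repeats the spdk_tgt launch sketched earlier,
  # and waitforlisten blocks until the new target answers RPCs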
00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:39.617 15:47:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:39.617 [2024-11-20 15:47:25.251882] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:35:39.617 [2024-11-20 15:47:25.252260] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83894 ] 00:35:39.617 [2024-11-20 15:47:25.431544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.617 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83671 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:35:39.617 [2024-11-20 15:47:25.566509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.989 [2024-11-20 15:47:26.707005] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:40.989 [2024-11-20 15:47:26.707097] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:40.989 [2024-11-20 15:47:26.857761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.989 [2024-11-20 15:47:26.857829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:35:40.989 [2024-11-20 15:47:26.857847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:40.989 [2024-11-20 15:47:26.857859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.989 [2024-11-20 15:47:26.857938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.989 [2024-11-20 15:47:26.857954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:40.989 [2024-11-20 15:47:26.857966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:35:40.989 [2024-11-20 15:47:26.857977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.989 [2024-11-20 15:47:26.858003] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:35:40.989 [2024-11-20 15:47:26.859197] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:35:40.989 [2024-11-20 15:47:26.859240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.989 [2024-11-20 15:47:26.859254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:40.989 [2024-11-20 15:47:26.859267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.241 ms 00:35:40.989 [2024-11-20 15:47:26.859279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.989 [2024-11-20 15:47:26.859777] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:35:40.989 [2024-11-20 15:47:26.888661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.989 [2024-11-20 15:47:26.888958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:35:40.989 [2024-11-20 15:47:26.888989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.879 ms 00:35:40.989 [2024-11-20 15:47:26.889020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.989 [2024-11-20 15:47:26.906777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:35:40.989 [2024-11-20 15:47:26.906852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:35:40.989 [2024-11-20 15:47:26.906876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:35:40.989 [2024-11-20 15:47:26.906888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.990 [2024-11-20 15:47:26.907534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.990 [2024-11-20 15:47:26.907557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:40.990 [2024-11-20 15:47:26.907594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.512 ms 00:35:40.990 [2024-11-20 15:47:26.907625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.990 [2024-11-20 15:47:26.907705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.990 [2024-11-20 15:47:26.907722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:40.990 [2024-11-20 15:47:26.907734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:35:40.990 [2024-11-20 15:47:26.907747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.990 [2024-11-20 15:47:26.907786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.990 [2024-11-20 15:47:26.907801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:35:40.990 [2024-11-20 15:47:26.907813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:35:40.990 [2024-11-20 15:47:26.907824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.990 [2024-11-20 15:47:26.907858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:35:40.990 [2024-11-20 15:47:26.913475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.990 [2024-11-20 15:47:26.913522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:40.990 [2024-11-20 15:47:26.913537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.624 ms 00:35:40.990 [2024-11-20 15:47:26.913550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.990 [2024-11-20 15:47:26.913612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.990 [2024-11-20 15:47:26.913627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:35:40.990 [2024-11-20 15:47:26.913640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:40.990 [2024-11-20 15:47:26.913652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.990 [2024-11-20 15:47:26.913711] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:35:40.990 [2024-11-20 15:47:26.913739] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:35:40.990 [2024-11-20 15:47:26.913781] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:35:40.990 [2024-11-20 15:47:26.913805] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:35:40.990 [2024-11-20 15:47:26.913914] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:35:40.990 [2024-11-20 15:47:26.913929] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:35:40.990 [2024-11-20 15:47:26.913945] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:35:40.990 [2024-11-20 15:47:26.913960] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:35:40.990 [2024-11-20 15:47:26.913975] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:35:40.990 [2024-11-20 15:47:26.913988] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:35:40.990 [2024-11-20 15:47:26.914000] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:35:40.990 [2024-11-20 15:47:26.914011] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:35:40.990 [2024-11-20 15:47:26.914023] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:35:40.990 [2024-11-20 15:47:26.914035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.990 [2024-11-20 15:47:26.914051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:35:40.990 [2024-11-20 15:47:26.914063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.327 ms 00:35:40.990 [2024-11-20 15:47:26.914075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.990 [2024-11-20 15:47:26.914170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.990 [2024-11-20 15:47:26.914183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:35:40.990 [2024-11-20 15:47:26.914195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:35:40.990 [2024-11-20 15:47:26.914207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.990 [2024-11-20 15:47:26.914315] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:35:40.990 [2024-11-20 15:47:26.914330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:35:40.990 [2024-11-20 15:47:26.914347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:40.990 [2024-11-20 15:47:26.914360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:40.990 [2024-11-20 15:47:26.914372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:35:40.990 [2024-11-20 15:47:26.914383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:35:40.990 [2024-11-20 15:47:26.914394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:35:40.990 [2024-11-20 15:47:26.914405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:35:40.990 [2024-11-20 15:47:26.914417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:35:40.990 [2024-11-20 15:47:26.914429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:40.990 [2024-11-20 15:47:26.914440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:35:40.990 [2024-11-20 15:47:26.914451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:35:40.990 [2024-11-20 15:47:26.914462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:40.990 [2024-11-20 15:47:26.914474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:35:40.990 [2024-11-20 15:47:26.914485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:35:40.990 [2024-11-20 15:47:26.914496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:40.990 [2024-11-20 15:47:26.914507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:35:40.990 [2024-11-20 15:47:26.914518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:35:40.990 [2024-11-20 15:47:26.914528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:40.990 [2024-11-20 15:47:26.914540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:35:40.990 [2024-11-20 15:47:26.914551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:35:40.990 [2024-11-20 15:47:26.914562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:40.990 [2024-11-20 15:47:26.914590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:35:40.990 [2024-11-20 15:47:26.914628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:35:40.990 [2024-11-20 15:47:26.914655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:40.990 [2024-11-20 15:47:26.914691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:35:40.990 [2024-11-20 15:47:26.914710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:35:40.990 [2024-11-20 15:47:26.914728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:40.990 [2024-11-20 15:47:26.914747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:35:40.990 [2024-11-20 15:47:26.914763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:35:40.990 [2024-11-20 15:47:26.914779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:40.990 [2024-11-20 15:47:26.914799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:35:40.990 [2024-11-20 15:47:26.914818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:35:40.990 [2024-11-20 15:47:26.914836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:40.990 [2024-11-20 15:47:26.914853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:35:40.990 [2024-11-20 15:47:26.914874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:35:40.990 [2024-11-20 15:47:26.914894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:40.990 [2024-11-20 15:47:26.914927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:35:40.990 [2024-11-20 15:47:26.914948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:35:40.990 [2024-11-20 15:47:26.914968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:40.990 [2024-11-20 15:47:26.914988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:35:40.990 [2024-11-20 15:47:26.915009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:35:40.990 [2024-11-20 15:47:26.915027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:40.990 [2024-11-20 15:47:26.915047] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:35:40.990 [2024-11-20 15:47:26.915067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:35:40.990 [2024-11-20 15:47:26.915079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:40.990 [2024-11-20 15:47:26.915100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:35:40.990 [2024-11-20 15:47:26.915124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:35:40.990 [2024-11-20 15:47:26.915146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:35:40.990 [2024-11-20 15:47:26.915167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:35:40.990 [2024-11-20 15:47:26.915186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:35:40.990 [2024-11-20 15:47:26.915206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:35:40.990 [2024-11-20 15:47:26.915226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:35:40.990 [2024-11-20 15:47:26.915249] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:35:40.990 [2024-11-20 15:47:26.915276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:40.990 [2024-11-20 15:47:26.915301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:35:40.990 [2024-11-20 15:47:26.915324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:35:40.990 [2024-11-20 15:47:26.915347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:35:40.990 [2024-11-20 15:47:26.915368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:35:40.990 [2024-11-20 15:47:26.915391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:35:40.990 [2024-11-20 15:47:26.915414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:35:40.991 [2024-11-20 15:47:26.915436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:35:40.991 [2024-11-20 15:47:26.915456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:35:40.991 [2024-11-20 15:47:26.915480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:35:40.991 [2024-11-20 15:47:26.915501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:35:40.991 [2024-11-20 15:47:26.915514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:35:40.991 [2024-11-20 15:47:26.915526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:35:40.991 [2024-11-20 15:47:26.915539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:35:40.991 [2024-11-20 15:47:26.915552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:35:40.991 [2024-11-20 15:47:26.915564] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:35:40.991 [2024-11-20 15:47:26.915578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:40.991 [2024-11-20 15:47:26.916020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:40.991 [2024-11-20 15:47:26.916103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:35:40.991 [2024-11-20 15:47:26.916163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:35:40.991 [2024-11-20 15:47:26.916272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:35:40.991 [2024-11-20 15:47:26.916337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.991 [2024-11-20 15:47:26.916465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:35:40.991 [2024-11-20 15:47:26.916523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.086 ms 00:35:40.991 [2024-11-20 15:47:26.916630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.248 [2024-11-20 15:47:26.962385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.248 [2024-11-20 15:47:26.962797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:41.248 [2024-11-20 15:47:26.962950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.598 ms 00:35:41.248 [2024-11-20 15:47:26.963100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.248 [2024-11-20 15:47:26.963247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.248 [2024-11-20 15:47:26.963387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:41.248 [2024-11-20 15:47:26.963526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:35:41.248 [2024-11-20 15:47:26.963614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.248 [2024-11-20 15:47:27.020690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.248 [2024-11-20 15:47:27.021070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:41.248 [2024-11-20 15:47:27.021228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 56.728 ms 00:35:41.248 [2024-11-20 15:47:27.021362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.248 [2024-11-20 15:47:27.021468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.249 [2024-11-20 15:47:27.021490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:41.249 [2024-11-20 15:47:27.021511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:41.249 [2024-11-20 15:47:27.021538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.249 [2024-11-20 15:47:27.021775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.249 [2024-11-20 15:47:27.021800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:41.249 [2024-11-20 15:47:27.021821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.099 ms 00:35:41.249 [2024-11-20 15:47:27.021840] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:35:41.249 [2024-11-20 15:47:27.021916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.249 [2024-11-20 15:47:27.021937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:41.249 [2024-11-20 15:47:27.021957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:35:41.249 [2024-11-20 15:47:27.021975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.249 [2024-11-20 15:47:27.048114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.249 [2024-11-20 15:47:27.048355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:41.249 [2024-11-20 15:47:27.048400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.092 ms 00:35:41.249 [2024-11-20 15:47:27.048421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.249 [2024-11-20 15:47:27.048634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.249 [2024-11-20 15:47:27.048654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:35:41.249 [2024-11-20 15:47:27.048668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:41.249 [2024-11-20 15:47:27.048681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.249 [2024-11-20 15:47:27.087525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.249 [2024-11-20 15:47:27.087799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:35:41.249 [2024-11-20 15:47:27.087847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.811 ms 00:35:41.249 [2024-11-20 15:47:27.087861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.249 [2024-11-20 15:47:27.106377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.249 [2024-11-20 15:47:27.106446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:35:41.249 [2024-11-20 15:47:27.106471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.714 ms 00:35:41.249 [2024-11-20 15:47:27.106482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.507 [2024-11-20 15:47:27.238450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.507 [2024-11-20 15:47:27.238852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:35:41.507 [2024-11-20 15:47:27.238907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 131.831 ms 00:35:41.507 [2024-11-20 15:47:27.238926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.507 [2024-11-20 15:47:27.239176] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:35:41.507 [2024-11-20 15:47:27.239329] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:35:41.507 [2024-11-20 15:47:27.239473] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:35:41.507 [2024-11-20 15:47:27.239637] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:35:41.507 [2024-11-20 15:47:27.239664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.507 [2024-11-20 15:47:27.239681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:35:41.507 
[2024-11-20 15:47:27.239699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.625 ms 00:35:41.507 [2024-11-20 15:47:27.239715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.507 [2024-11-20 15:47:27.239856] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:35:41.507 [2024-11-20 15:47:27.239887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.507 [2024-11-20 15:47:27.239912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:35:41.507 [2024-11-20 15:47:27.239929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:35:41.507 [2024-11-20 15:47:27.239945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.507 [2024-11-20 15:47:27.278057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.507 [2024-11-20 15:47:27.278382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:35:41.507 [2024-11-20 15:47:27.278420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.059 ms 00:35:41.508 [2024-11-20 15:47:27.278437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.508 [2024-11-20 15:47:27.302586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.508 [2024-11-20 15:47:27.302668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:35:41.508 [2024-11-20 15:47:27.302691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:35:41.508 [2024-11-20 15:47:27.302709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:41.508 [2024-11-20 15:47:27.302871] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:35:41.508 [2024-11-20 15:47:27.303100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:41.508 [2024-11-20 15:47:27.303124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:35:41.508 [2024-11-20 15:47:27.303142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.232 ms 00:35:41.508 [2024-11-20 15:47:27.303158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.073 [2024-11-20 15:47:27.754900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.073 [2024-11-20 15:47:27.754982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:35:42.073 [2024-11-20 15:47:27.755005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 449.526 ms 00:35:42.073 [2024-11-20 15:47:27.755019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.073 [2024-11-20 15:47:27.761540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.073 [2024-11-20 15:47:27.761803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:35:42.073 [2024-11-20 15:47:27.761832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.863 ms 00:35:42.073 [2024-11-20 15:47:27.761847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.073 [2024-11-20 15:47:27.762206] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:35:42.073 [2024-11-20 15:47:27.762231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.073 [2024-11-20 15:47:27.762244] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:35:42.073 [2024-11-20 15:47:27.762258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.323 ms 00:35:42.073 [2024-11-20 15:47:27.762270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.073 [2024-11-20 15:47:27.762307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.073 [2024-11-20 15:47:27.762322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:35:42.073 [2024-11-20 15:47:27.762335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:42.073 [2024-11-20 15:47:27.762347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.073 [2024-11-20 15:47:27.762399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 459.530 ms, result 0 00:35:42.073 [2024-11-20 15:47:27.762452] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:35:42.073 [2024-11-20 15:47:27.762562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.073 [2024-11-20 15:47:27.762593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:35:42.073 [2024-11-20 15:47:27.762605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.111 ms 00:35:42.073 [2024-11-20 15:47:27.762617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 [2024-11-20 15:47:28.189076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.332 [2024-11-20 15:47:28.189153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:35:42.332 [2024-11-20 15:47:28.189173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 424.837 ms 00:35:42.332 [2024-11-20 15:47:28.189214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 [2024-11-20 15:47:28.195894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.332 [2024-11-20 15:47:28.196121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:35:42.332 [2024-11-20 15:47:28.196147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.896 ms 00:35:42.332 [2024-11-20 15:47:28.196160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 [2024-11-20 15:47:28.196517] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:35:42.332 [2024-11-20 15:47:28.196542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.332 [2024-11-20 15:47:28.196554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:35:42.332 [2024-11-20 15:47:28.196590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.334 ms 00:35:42.332 [2024-11-20 15:47:28.196604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 [2024-11-20 15:47:28.196643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.332 [2024-11-20 15:47:28.196657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:35:42.332 [2024-11-20 15:47:28.196669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:42.332 [2024-11-20 15:47:28.196681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 
[2024-11-20 15:47:28.196731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 434.272 ms, result 0 00:35:42.332 [2024-11-20 15:47:28.196782] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:42.332 [2024-11-20 15:47:28.196797] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:35:42.332 [2024-11-20 15:47:28.196812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.332 [2024-11-20 15:47:28.196825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:35:42.332 [2024-11-20 15:47:28.196838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 893.965 ms 00:35:42.332 [2024-11-20 15:47:28.196850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 [2024-11-20 15:47:28.196890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.332 [2024-11-20 15:47:28.196904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:35:42.332 [2024-11-20 15:47:28.196921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:42.332 [2024-11-20 15:47:28.196933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 [2024-11-20 15:47:28.212571] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:35:42.332 [2024-11-20 15:47:28.212792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.332 [2024-11-20 15:47:28.212809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:35:42.332 [2024-11-20 15:47:28.212825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.836 ms 00:35:42.332 [2024-11-20 15:47:28.212838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 [2024-11-20 15:47:28.213555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.332 [2024-11-20 15:47:28.213617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:35:42.332 [2024-11-20 15:47:28.213638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.593 ms 00:35:42.332 [2024-11-20 15:47:28.213661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 [2024-11-20 15:47:28.216160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.332 [2024-11-20 15:47:28.216190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:35:42.332 [2024-11-20 15:47:28.216204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.473 ms 00:35:42.332 [2024-11-20 15:47:28.216216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 [2024-11-20 15:47:28.216268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.332 [2024-11-20 15:47:28.216281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:35:42.332 [2024-11-20 15:47:28.216295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:42.332 [2024-11-20 15:47:28.216312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 [2024-11-20 15:47:28.216473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.332 [2024-11-20 15:47:28.216495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 
00:35:42.332 [2024-11-20 15:47:28.216508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:35:42.332 [2024-11-20 15:47:28.216520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.332 [2024-11-20 15:47:28.216553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.333 [2024-11-20 15:47:28.216566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:35:42.333 [2024-11-20 15:47:28.216599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:42.333 [2024-11-20 15:47:28.216611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.333 [2024-11-20 15:47:28.216658] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:35:42.333 [2024-11-20 15:47:28.216685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.333 [2024-11-20 15:47:28.216697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:35:42.333 [2024-11-20 15:47:28.216725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:35:42.333 [2024-11-20 15:47:28.216738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.333 [2024-11-20 15:47:28.216802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:42.333 [2024-11-20 15:47:28.216817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:35:42.333 [2024-11-20 15:47:28.216829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:35:42.333 [2024-11-20 15:47:28.216841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:42.333 [2024-11-20 15:47:28.218065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1359.720 ms, result 0 00:35:42.333 [2024-11-20 15:47:28.233376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:42.333 [2024-11-20 15:47:28.249429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:35:42.333 [2024-11-20 15:47:28.260098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:42.591 Validate MD5 checksum, iteration 1 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:42.591 15:47:28 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:42.591 15:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:42.591 [2024-11-20 15:47:28.436270] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:35:42.591 [2024-11-20 15:47:28.437081] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83930 ] 00:35:42.849 [2024-11-20 15:47:28.640046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.106 [2024-11-20 15:47:28.815190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.002  [2024-11-20T15:47:31.893Z] Copying: 498/1024 [MB] (498 MBps) [2024-11-20T15:47:31.893Z] Copying: 997/1024 [MB] (499 MBps) [2024-11-20T15:47:36.077Z] Copying: 1024/1024 [MB] (average 499 MBps) 00:35:50.119 00:35:50.119 15:47:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:35:50.119 15:47:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:35:52.018 Validate MD5 checksum, iteration 2 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0ae168b6b6f59132c60e2f91c34e07b5 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0ae168b6b6f59132c60e2f91c34e07b5 != \0\a\e\1\6\8\b\6\b\6\f\5\9\1\3\2\c\6\0\e\2\f\9\1\c\3\4\e\0\7\b\5 ]] 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:52.018 15:47:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:52.018 
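[Editor's note: the xtrace interleaved above comes from test/ftl/upgrade_shutdown.sh, markers @96-105, entered via test_validate_checksum at @116. It reads more easily as straight-line shell. Below is a minimal sketch reconstructed only from the traced commands; the expected_md5 array and testfile variable are placeholders for whatever the script actually uses, and the failure handling is assumed, not verbatim.]

    # Hypothetical reconstruction of test_validate_checksum from the xtrace
    # markers upgrade_shutdown.sh@96-105; not the verbatim script.
    test_validate_checksum() {
        local skip=0 i sum
        for (( i = 0; i < iterations; i++ )); do                # @97
            echo "Validate MD5 checksum, iteration $((i + 1))"  # @98
            # @99: read 1024 x 1 MiB blocks from the ftln1 initiator into a
            # scratch file via tcp_dd (common.sh@198-199, a spdk_dd wrapper).
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))                               # @100: 0 -> 1024 -> 2048 in the trace
            sum=$(md5sum "$testfile" | cut -f1 -d' ')           # @102-103
            # @105: the computed sum must equal the one recorded before the
            # shutdown/upgrade cycle; iteration 1 matches above (0ae168...),
            # and iteration 2 follows below.
            [[ $sum == "${expected_md5[i]}" ]] || return 1
        done
    }
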
[2024-11-20 15:47:37.950987] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:35:52.018 [2024-11-20 15:47:37.951447] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84031 ] 00:35:52.277 [2024-11-20 15:47:38.126512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.535 [2024-11-20 15:47:38.252276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.435  [2024-11-20T15:47:40.957Z] Copying: 500/1024 [MB] (500 MBps) [2024-11-20T15:47:42.353Z] Copying: 1024/1024 [MB] (average 544 MBps) 00:35:56.395 00:35:56.653 15:47:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:35:56.653 15:47:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=30271ada2f8017e5450b5d05381e6fed 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 30271ada2f8017e5450b5d05381e6fed != \3\0\2\7\1\a\d\a\2\f\8\0\1\7\e\5\4\5\0\b\5\d\0\5\3\8\1\e\6\f\e\d ]] 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83894 ]] 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83894 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83894 ']' 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83894 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83894 00:35:59.179 killing process with pid 83894 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83894' 
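[Editor's note: the killprocess trace above (autotest_common.sh@954-972; the kill and wait at @973/@978 follow immediately below) guards the kill with several probes. A sketch assembled from the traced commands; the branch structure is inferred, not verbatim. The @981 "not found" path is seen later in this log for pid 76974.]

    # Hypothetical reconstruction of killprocess from the xtrace markers
    # autotest_common.sh@954-981; control flow is inferred from the trace.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                        # @954: a pid argument is required
        if ! kill -0 "$pid" 2>/dev/null; then            # @958: does the process still exist?
            echo "Process with pid $pid is not found"    # @981: taken for pid 76974 below
            return 0
        fi
        if [ "$(uname)" = Linux ]; then                  # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_0 in this run
        fi
        # @964 compares $process_name against "sudo"; that path is not taken
        # here, so its handling is elided from this sketch.
        echo "killing process with pid $pid"             # @972
        kill "$pid"                                      # @973
        wait "$pid"                                      # @978: reaps the child target; the
                                                         # FTL shutdown log below runs during this wait
    }
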
00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83894 00:35:59.179 15:47:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83894 00:36:00.114 [2024-11-20 15:47:46.066564] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:36:00.374 [2024-11-20 15:47:46.088139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.088201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:36:00.374 [2024-11-20 15:47:46.088219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:00.374 [2024-11-20 15:47:46.088232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.088261] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:36:00.374 [2024-11-20 15:47:46.092574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.092632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:36:00.374 [2024-11-20 15:47:46.092655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.290 ms 00:36:00.374 [2024-11-20 15:47:46.092666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.092906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.092921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:36:00.374 [2024-11-20 15:47:46.092934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms 00:36:00.374 [2024-11-20 15:47:46.092944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.094199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.094245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:36:00.374 [2024-11-20 15:47:46.094263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.236 ms 00:36:00.374 [2024-11-20 15:47:46.094278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.095704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.095748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:36:00.374 [2024-11-20 15:47:46.095766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.377 ms 00:36:00.374 [2024-11-20 15:47:46.095792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.112886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.113117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:36:00.374 [2024-11-20 15:47:46.113144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.004 ms 00:36:00.374 [2024-11-20 15:47:46.113184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.121700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.121766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:36:00.374 [2024-11-20 15:47:46.121785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.464 ms 00:36:00.374 [2024-11-20 
15:47:46.121820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.121973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.121992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:36:00.374 [2024-11-20 15:47:46.122008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:36:00.374 [2024-11-20 15:47:46.122023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.138472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.138559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:36:00.374 [2024-11-20 15:47:46.138626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.394 ms 00:36:00.374 [2024-11-20 15:47:46.138664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.154851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.154951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:36:00.374 [2024-11-20 15:47:46.154980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.112 ms 00:36:00.374 [2024-11-20 15:47:46.155001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.170914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.171205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:36:00.374 [2024-11-20 15:47:46.171250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.822 ms 00:36:00.374 [2024-11-20 15:47:46.171263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.188218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.188302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:36:00.374 [2024-11-20 15:47:46.188320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.817 ms 00:36:00.374 [2024-11-20 15:47:46.188330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.188397] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:36:00.374 [2024-11-20 15:47:46.188417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:00.374 [2024-11-20 15:47:46.188431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:36:00.374 [2024-11-20 15:47:46.188442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:36:00.374 [2024-11-20 15:47:46.188453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 
0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:00.374 [2024-11-20 15:47:46.188654] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:36:00.374 [2024-11-20 15:47:46.188678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e141d19c-be18-465e-80a9-1bc6a5ba2fca 00:36:00.374 [2024-11-20 15:47:46.188690] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:36:00.374 [2024-11-20 15:47:46.188702] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:36:00.374 [2024-11-20 15:47:46.188713] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:36:00.374 [2024-11-20 15:47:46.188725] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:36:00.374 [2024-11-20 15:47:46.188736] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:36:00.374 [2024-11-20 15:47:46.188761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:36:00.374 [2024-11-20 15:47:46.188771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:36:00.374 [2024-11-20 15:47:46.188781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:36:00.374 [2024-11-20 15:47:46.188790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:36:00.374 [2024-11-20 15:47:46.188800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.188847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:36:00.374 [2024-11-20 15:47:46.188867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.404 ms 00:36:00.374 [2024-11-20 15:47:46.188885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.210858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.210946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:36:00.374 [2024-11-20 15:47:46.210968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.896 ms 00:36:00.374 [2024-11-20 15:47:46.210983] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.211529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.374 [2024-11-20 15:47:46.211549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:36:00.374 [2024-11-20 15:47:46.211565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.492 ms 00:36:00.374 [2024-11-20 15:47:46.211624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.287378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.374 [2024-11-20 15:47:46.287722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:00.374 [2024-11-20 15:47:46.287754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.374 [2024-11-20 15:47:46.287767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.374 [2024-11-20 15:47:46.287850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.374 [2024-11-20 15:47:46.287865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:00.374 [2024-11-20 15:47:46.287878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.374 [2024-11-20 15:47:46.287890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.375 [2024-11-20 15:47:46.288037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.375 [2024-11-20 15:47:46.288054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:00.375 [2024-11-20 15:47:46.288067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.375 [2024-11-20 15:47:46.288079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.375 [2024-11-20 15:47:46.288102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.375 [2024-11-20 15:47:46.288121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:00.375 [2024-11-20 15:47:46.288133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.375 [2024-11-20 15:47:46.288145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.641 [2024-11-20 15:47:46.434537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.641 [2024-11-20 15:47:46.434621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:00.641 [2024-11-20 15:47:46.434669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.641 [2024-11-20 15:47:46.434683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.641 [2024-11-20 15:47:46.558130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.641 [2024-11-20 15:47:46.558198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:00.641 [2024-11-20 15:47:46.558216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.641 [2024-11-20 15:47:46.558229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.641 [2024-11-20 15:47:46.558354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.641 [2024-11-20 15:47:46.558369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:00.641 [2024-11-20 15:47:46.558382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:36:00.641 [2024-11-20 15:47:46.558393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.641 [2024-11-20 15:47:46.558454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.641 [2024-11-20 15:47:46.558469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:00.641 [2024-11-20 15:47:46.558490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.641 [2024-11-20 15:47:46.558514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.641 [2024-11-20 15:47:46.558682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.641 [2024-11-20 15:47:46.558703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:00.641 [2024-11-20 15:47:46.558719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.641 [2024-11-20 15:47:46.558734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.641 [2024-11-20 15:47:46.558787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.641 [2024-11-20 15:47:46.558804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:36:00.641 [2024-11-20 15:47:46.558823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.641 [2024-11-20 15:47:46.558849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.641 [2024-11-20 15:47:46.558909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.641 [2024-11-20 15:47:46.558933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:00.641 [2024-11-20 15:47:46.558950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.641 [2024-11-20 15:47:46.558965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.641 [2024-11-20 15:47:46.559030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:00.641 [2024-11-20 15:47:46.559047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:00.642 [2024-11-20 15:47:46.559069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:00.642 [2024-11-20 15:47:46.559084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.642 [2024-11-20 15:47:46.559284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 471.079 ms, result 0 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:02.561 Remove shared memory files 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f 
rm -f 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83671 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:36:02.561 ************************************ 00:36:02.561 END TEST ftl_upgrade_shutdown 00:36:02.561 ************************************ 00:36:02.561 00:36:02.561 real 1m39.916s 00:36:02.561 user 2m20.263s 00:36:02.561 sys 0m25.936s 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:02.561 15:47:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:02.561 15:47:48 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:36:02.561 15:47:48 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:02.561 15:47:48 ftl -- ftl/ftl.sh@14 -- # killprocess 76974 00:36:02.561 15:47:48 ftl -- common/autotest_common.sh@954 -- # '[' -z 76974 ']' 00:36:02.561 15:47:48 ftl -- common/autotest_common.sh@958 -- # kill -0 76974 00:36:02.561 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76974) - No such process 00:36:02.561 Process with pid 76974 is not found 00:36:02.561 15:47:48 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76974 is not found' 00:36:02.561 15:47:48 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:02.561 15:47:48 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84169 00:36:02.561 15:47:48 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:02.561 15:47:48 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84169 00:36:02.561 15:47:48 ftl -- common/autotest_common.sh@835 -- # '[' -z 84169 ']' 00:36:02.561 15:47:48 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.561 15:47:48 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.561 15:47:48 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.561 15:47:48 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.561 15:47:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:02.561 [2024-11-20 15:47:48.403149] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
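[Editor's note: remove_shm (ftl/common.sh@204-209) runs twice in this log, once above after the upgrade/shutdown test and again below after the whole suite. The bare "rm -f" entries in its trace are patterns that expanded empty on this run. A sketch of what the traced invocations amount to; the spdk_tgt_pid name is taken from the `unset spdk_tgt_pid` trace at common.sh@132, and only the removals that actually matched here are shown.]

    # Hypothetical condensation of remove_shm (ftl/common.sh@204-209) as
    # traced in this run; not the verbatim helper.
    remove_shm() {
        echo "Remove shared memory files"                     # @204
        # @205-206, @209: several patterns expand empty here, hence the
        # bare "rm -f" lines in the xtrace.
        rm -f /dev/shm/spdk_tgt_trace.pid"${spdk_tgt_pid}"    # @207: pid 83671 above
        rm -f /dev/shm/iscsi                                  # @208
    }
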
00:36:02.561 [2024-11-20 15:47:48.403346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84169 ] 00:36:02.820 [2024-11-20 15:47:48.603028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.820 [2024-11-20 15:47:48.766838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.771 15:47:49 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:03.771 15:47:49 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:03.771 15:47:49 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:04.337 nvme0n1 00:36:04.337 15:47:50 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:04.337 15:47:50 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:04.337 15:47:50 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:04.595 15:47:50 ftl -- ftl/common.sh@28 -- # stores=42be9348-94ea-42a5-9f2f-1e0b02b7f36f 00:36:04.595 15:47:50 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:04.595 15:47:50 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42be9348-94ea-42a5-9f2f-1e0b02b7f36f 00:36:04.853 15:47:50 ftl -- ftl/ftl.sh@23 -- # killprocess 84169 00:36:04.853 15:47:50 ftl -- common/autotest_common.sh@954 -- # '[' -z 84169 ']' 00:36:04.853 15:47:50 ftl -- common/autotest_common.sh@958 -- # kill -0 84169 00:36:04.853 15:47:50 ftl -- common/autotest_common.sh@959 -- # uname 00:36:04.853 15:47:50 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:04.853 15:47:50 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84169 00:36:04.853 killing process with pid 84169 00:36:04.853 15:47:50 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:04.853 15:47:50 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:04.853 15:47:50 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84169' 00:36:04.853 15:47:50 ftl -- common/autotest_common.sh@973 -- # kill 84169 00:36:04.853 15:47:50 ftl -- common/autotest_common.sh@978 -- # wait 84169 00:36:07.474 15:47:53 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:07.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:07.732 Waiting for block devices as requested 00:36:07.732 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:07.990 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:07.990 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:08.248 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:13.506 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:13.506 15:47:59 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:13.506 Remove shared memory files 00:36:13.506 15:47:59 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:13.506 15:47:59 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:13.506 15:47:59 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:13.506 15:47:59 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:13.506 15:47:59 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:13.506 15:47:59 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:13.506 00:36:13.506 real 
10m46.577s 00:36:13.506 user 13m28.393s 00:36:13.506 sys 1m35.040s 00:36:13.506 15:47:59 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.506 ************************************ 00:36:13.506 END TEST ftl 00:36:13.506 15:47:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:13.506 ************************************ 00:36:13.506 15:47:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:13.506 15:47:59 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:13.506 15:47:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:13.506 15:47:59 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:13.506 15:47:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:13.506 15:47:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:13.506 15:47:59 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:13.506 15:47:59 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:13.506 15:47:59 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:13.506 15:47:59 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:13.506 15:47:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:13.506 15:47:59 -- common/autotest_common.sh@10 -- # set +x 00:36:13.506 15:47:59 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:13.506 15:47:59 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:13.506 15:47:59 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:13.506 15:47:59 -- common/autotest_common.sh@10 -- # set +x 00:36:15.406 INFO: APP EXITING 00:36:15.406 INFO: killing all VMs 00:36:15.406 INFO: killing vhost app 00:36:15.406 INFO: EXIT DONE 00:36:15.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:16.230 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:16.230 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:16.230 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:16.230 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:16.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:17.054 Cleaning 00:36:17.054 Removing: /var/run/dpdk/spdk0/config 00:36:17.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:17.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:17.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:17.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:17.054 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:17.054 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:17.054 Removing: /var/run/dpdk/spdk0 00:36:17.054 Removing: /var/run/dpdk/spdk_pid57680 00:36:17.054 Removing: /var/run/dpdk/spdk_pid57920 00:36:17.054 Removing: /var/run/dpdk/spdk_pid58159 00:36:17.054 Removing: /var/run/dpdk/spdk_pid58264 00:36:17.054 Removing: /var/run/dpdk/spdk_pid58326 00:36:17.054 Removing: /var/run/dpdk/spdk_pid58459 00:36:17.054 Removing: /var/run/dpdk/spdk_pid58483 00:36:17.054 Removing: /var/run/dpdk/spdk_pid58693 00:36:17.054 Removing: /var/run/dpdk/spdk_pid58810 00:36:17.054 Removing: /var/run/dpdk/spdk_pid58917 00:36:17.054 Removing: /var/run/dpdk/spdk_pid59039 00:36:17.054 Removing: /var/run/dpdk/spdk_pid59147 00:36:17.054 Removing: /var/run/dpdk/spdk_pid59192 00:36:17.054 Removing: /var/run/dpdk/spdk_pid59223 00:36:17.054 Removing: /var/run/dpdk/spdk_pid59299 00:36:17.054 Removing: /var/run/dpdk/spdk_pid59416 00:36:17.054 Removing: /var/run/dpdk/spdk_pid59876 00:36:17.054 Removing: /var/run/dpdk/spdk_pid59957 00:36:17.054 
00:36:13.506 15:47:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:36:13.506 15:47:59 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:36:13.506 15:47:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:36:13.506 15:47:59 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:36:13.506 15:47:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:36:13.506 15:47:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:36:13.506 15:47:59 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:36:13.506 15:47:59 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:36:13.506 15:47:59 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:36:13.506 15:47:59 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:36:13.506 15:47:59 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:13.506 15:47:59 -- common/autotest_common.sh@10 -- # set +x
00:36:13.506 15:47:59 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:36:13.506 15:47:59 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:36:13.506 15:47:59 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:36:13.506 15:47:59 -- common/autotest_common.sh@10 -- # set +x
00:36:15.406 INFO: APP EXITING
00:36:15.406 INFO: killing all VMs
00:36:15.406 INFO: killing vhost app
00:36:15.406 INFO: EXIT DONE
00:36:15.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:16.230 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:36:16.230 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:36:16.230 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:36:16.230 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:36:16.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:17.054 Cleaning
00:36:17.054 Removing: /var/run/dpdk/spdk0/config
00:36:17.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:36:17.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:36:17.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:36:17.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:36:17.054 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:36:17.054 Removing: /var/run/dpdk/spdk0/hugepage_info
00:36:17.054 Removing: /var/run/dpdk/spdk0
00:36:17.054 Removing: /var/run/dpdk/spdk_pid57680
00:36:17.054 Removing: /var/run/dpdk/spdk_pid57920
00:36:17.054 Removing: /var/run/dpdk/spdk_pid58159
00:36:17.054 Removing: /var/run/dpdk/spdk_pid58264
00:36:17.054 Removing: /var/run/dpdk/spdk_pid58326
00:36:17.054 Removing: /var/run/dpdk/spdk_pid58459
00:36:17.054 Removing: /var/run/dpdk/spdk_pid58483
00:36:17.054 Removing: /var/run/dpdk/spdk_pid58693
00:36:17.054 Removing: /var/run/dpdk/spdk_pid58810
00:36:17.054 Removing: /var/run/dpdk/spdk_pid58917
00:36:17.054 Removing: /var/run/dpdk/spdk_pid59039
00:36:17.054 Removing: /var/run/dpdk/spdk_pid59147
00:36:17.054 Removing: /var/run/dpdk/spdk_pid59192
00:36:17.054 Removing: /var/run/dpdk/spdk_pid59223
00:36:17.054 Removing: /var/run/dpdk/spdk_pid59299
00:36:17.054 Removing: /var/run/dpdk/spdk_pid59416
00:36:17.054 Removing: /var/run/dpdk/spdk_pid59876
00:36:17.054 Removing: /var/run/dpdk/spdk_pid59957
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60036
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60052
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60211
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60227
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60397
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60419
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60488
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60512
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60576
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60599
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60811
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60842
00:36:17.054 Removing: /var/run/dpdk/spdk_pid60931
00:36:17.054 Removing: /var/run/dpdk/spdk_pid61125
00:36:17.054 Removing: /var/run/dpdk/spdk_pid61220
00:36:17.054 Removing: /var/run/dpdk/spdk_pid61268
00:36:17.054 Removing: /var/run/dpdk/spdk_pid61735
00:36:17.054 Removing: /var/run/dpdk/spdk_pid61839
00:36:17.054 Removing: /var/run/dpdk/spdk_pid61953
00:36:17.054 Removing: /var/run/dpdk/spdk_pid62012
00:36:17.054 Removing: /var/run/dpdk/spdk_pid62037
00:36:17.054 Removing: /var/run/dpdk/spdk_pid62123
00:36:17.054 Removing: /var/run/dpdk/spdk_pid62764
00:36:17.054 Removing: /var/run/dpdk/spdk_pid62812
00:36:17.054 Removing: /var/run/dpdk/spdk_pid63332
00:36:17.054 Removing: /var/run/dpdk/spdk_pid63435
00:36:17.054 Removing: /var/run/dpdk/spdk_pid63551
00:36:17.054 Removing: /var/run/dpdk/spdk_pid63610
00:36:17.054 Removing: /var/run/dpdk/spdk_pid63635
00:36:17.054 Removing: /var/run/dpdk/spdk_pid63666
00:36:17.054 Removing: /var/run/dpdk/spdk_pid65569
00:36:17.312 Removing: /var/run/dpdk/spdk_pid65720
00:36:17.312 Removing: /var/run/dpdk/spdk_pid65724
00:36:17.313 Removing: /var/run/dpdk/spdk_pid65742
00:36:17.313 Removing: /var/run/dpdk/spdk_pid65784
00:36:17.313 Removing: /var/run/dpdk/spdk_pid65793
00:36:17.313 Removing: /var/run/dpdk/spdk_pid65806
00:36:17.313 Removing: /var/run/dpdk/spdk_pid65850
00:36:17.313 Removing: /var/run/dpdk/spdk_pid65854
00:36:17.313 Removing: /var/run/dpdk/spdk_pid65866
00:36:17.313 Removing: /var/run/dpdk/spdk_pid65916
00:36:17.313 Removing: /var/run/dpdk/spdk_pid65920
00:36:17.313 Removing: /var/run/dpdk/spdk_pid65937
00:36:17.313 Removing: /var/run/dpdk/spdk_pid67352
00:36:17.313 Removing: /var/run/dpdk/spdk_pid67471
00:36:17.313 Removing: /var/run/dpdk/spdk_pid68901
00:36:17.313 Removing: /var/run/dpdk/spdk_pid70649
00:36:17.313 Removing: /var/run/dpdk/spdk_pid70740
00:36:17.313 Removing: /var/run/dpdk/spdk_pid70826
00:36:17.313 Removing: /var/run/dpdk/spdk_pid70936
00:36:17.313 Removing: /var/run/dpdk/spdk_pid71039
00:36:17.313 Removing: /var/run/dpdk/spdk_pid71139
00:36:17.313 Removing: /var/run/dpdk/spdk_pid71220
00:36:17.313 Removing: /var/run/dpdk/spdk_pid71301
00:36:17.313 Removing: /var/run/dpdk/spdk_pid71415
00:36:17.313 Removing: /var/run/dpdk/spdk_pid71508
00:36:17.313 Removing: /var/run/dpdk/spdk_pid71615
00:36:17.313 Removing: /var/run/dpdk/spdk_pid71696
00:36:17.313 Removing: /var/run/dpdk/spdk_pid71776
00:36:17.313 Removing: /var/run/dpdk/spdk_pid71886
00:36:17.313 Removing: /var/run/dpdk/spdk_pid71981
00:36:17.313 Removing: /var/run/dpdk/spdk_pid72091
00:36:17.313 Removing: /var/run/dpdk/spdk_pid72172
00:36:17.313 Removing: /var/run/dpdk/spdk_pid72253
00:36:17.313 Removing: /var/run/dpdk/spdk_pid72357
00:36:17.313 Removing: /var/run/dpdk/spdk_pid72464
00:36:17.313 Removing: /var/run/dpdk/spdk_pid72561
00:36:17.313 Removing: /var/run/dpdk/spdk_pid72646
00:36:17.313 Removing: /var/run/dpdk/spdk_pid72724
00:36:17.313 Removing: /var/run/dpdk/spdk_pid72807
00:36:17.313 Removing: /var/run/dpdk/spdk_pid72887
00:36:17.313 Removing: /var/run/dpdk/spdk_pid72996
00:36:17.313 Removing: /var/run/dpdk/spdk_pid73087
00:36:17.313 Removing: /var/run/dpdk/spdk_pid73187
00:36:17.313 Removing: /var/run/dpdk/spdk_pid73268
00:36:17.313 Removing: /var/run/dpdk/spdk_pid73348
00:36:17.313 Removing: /var/run/dpdk/spdk_pid73429
00:36:17.313 Removing: /var/run/dpdk/spdk_pid73503
00:36:17.313 Removing: /var/run/dpdk/spdk_pid73612
00:36:17.313 Removing: /var/run/dpdk/spdk_pid73709
00:36:17.313 Removing: /var/run/dpdk/spdk_pid73858
00:36:17.313 Removing: /var/run/dpdk/spdk_pid74153
00:36:17.313 Removing: /var/run/dpdk/spdk_pid74191
00:36:17.313 Removing: /var/run/dpdk/spdk_pid74654
00:36:17.313 Removing: /var/run/dpdk/spdk_pid74837
00:36:17.313 Removing: /var/run/dpdk/spdk_pid74937
00:36:17.313 Removing: /var/run/dpdk/spdk_pid75057
00:36:17.313 Removing: /var/run/dpdk/spdk_pid75114
00:36:17.313 Removing: /var/run/dpdk/spdk_pid75139
00:36:17.313 Removing: /var/run/dpdk/spdk_pid75441
00:36:17.313 Removing: /var/run/dpdk/spdk_pid75507
00:36:17.313 Removing: /var/run/dpdk/spdk_pid75599
00:36:17.313 Removing: /var/run/dpdk/spdk_pid76020
00:36:17.313 Removing: /var/run/dpdk/spdk_pid76166
00:36:17.313 Removing: /var/run/dpdk/spdk_pid76974
00:36:17.313 Removing: /var/run/dpdk/spdk_pid77123
00:36:17.313 Removing: /var/run/dpdk/spdk_pid77327
00:36:17.313 Removing: /var/run/dpdk/spdk_pid77434
00:36:17.313 Removing: /var/run/dpdk/spdk_pid77776
00:36:17.313 Removing: /var/run/dpdk/spdk_pid78036
00:36:17.313 Removing: /var/run/dpdk/spdk_pid78386
00:36:17.313 Removing: /var/run/dpdk/spdk_pid78580
00:36:17.313 Removing: /var/run/dpdk/spdk_pid78699
00:36:17.572 Removing: /var/run/dpdk/spdk_pid78777
00:36:17.572 Removing: /var/run/dpdk/spdk_pid78898
00:36:17.572 Removing: /var/run/dpdk/spdk_pid78929
00:36:17.572 Removing: /var/run/dpdk/spdk_pid78997
00:36:17.572 Removing: /var/run/dpdk/spdk_pid79192
00:36:17.572 Removing: /var/run/dpdk/spdk_pid79431
00:36:17.572 Removing: /var/run/dpdk/spdk_pid79789
00:36:17.572 Removing: /var/run/dpdk/spdk_pid80159
00:36:17.572 Removing: /var/run/dpdk/spdk_pid80525
00:36:17.572 Removing: /var/run/dpdk/spdk_pid80948
00:36:17.572 Removing: /var/run/dpdk/spdk_pid81101
00:36:17.572 Removing: /var/run/dpdk/spdk_pid81195
00:36:17.572 Removing: /var/run/dpdk/spdk_pid81772
00:36:17.572 Removing: /var/run/dpdk/spdk_pid81854
00:36:17.572 Removing: /var/run/dpdk/spdk_pid82241
00:36:17.572 Removing: /var/run/dpdk/spdk_pid82598
00:36:17.572 Removing: /var/run/dpdk/spdk_pid83047
00:36:17.572 Removing: /var/run/dpdk/spdk_pid83186
00:36:17.572 Removing: /var/run/dpdk/spdk_pid83251
00:36:17.572 Removing: /var/run/dpdk/spdk_pid83325
00:36:17.572 Removing: /var/run/dpdk/spdk_pid83383
00:36:17.572 Removing: /var/run/dpdk/spdk_pid83455
00:36:17.572 Removing: /var/run/dpdk/spdk_pid83671
00:36:17.572 Removing: /var/run/dpdk/spdk_pid83747
00:36:17.572 Removing: /var/run/dpdk/spdk_pid83815
00:36:17.572 Removing: /var/run/dpdk/spdk_pid83894
00:36:17.572 Removing: /var/run/dpdk/spdk_pid83930
00:36:17.572 Removing: /var/run/dpdk/spdk_pid84031
00:36:17.572 Removing: /var/run/dpdk/spdk_pid84169
00:36:17.572 Clean
00:36:17.572 15:48:03 -- common/autotest_common.sh@1453 -- # return 0
00:36:17.572 15:48:03 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:17.572 15:48:03 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:17.572 15:48:03 -- common/autotest_common.sh@10 -- # set +x
00:36:17.572 15:48:03 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:17.572 15:48:03 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:17.572 15:48:03 -- common/autotest_common.sh@10 -- # set +x
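
autotest_cleanup above removes the DPDK runtime state (/var/run/dpdk/spdk0 and one spdk_pidNNNNN directory per SPDK process the job started), and the timing_enter/timing_exit pairs bracket each section for the timing report. A minimal sketch of how such a pair can work is below; the real helpers live in common/autotest_common.sh and may differ, and the record format here is assumed from the flamegraph.pl invocation (--nametype Step: --countname seconds) that consumes timing.txt later in this log.

declare -A _timing_start
output_dir=/home/vagrant/spdk_repo/output   # assumed location of timing.txt

timing_enter() {
    _timing_start[$1]=$SECONDS              # remember when the section began
}

timing_exit() {
    local label=$1
    local elapsed=$(( SECONDS - _timing_start[$label] ))
    # One "Step: seconds" record per section, post-processed by flamegraph.pl
    echo "$label $elapsed" >> "$output_dir/timing.txt"
}
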
00:36:17.572 15:48:03 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:17.572 15:48:03 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:36:17.572 15:48:03 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:36:17.572 15:48:03 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:17.572 15:48:03 -- spdk/autotest.sh@398 -- # hostname
00:36:17.572 15:48:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:36:17.831 geninfo: WARNING: invalid characters removed from testname!
00:36:49.942 15:48:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:49.942 15:48:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:52.472 15:48:37 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:54.998 15:48:40 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:56.899 15:48:42 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:59.430 15:48:45 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:01.960 15:48:47 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
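
The lcov invocations above form a capture / merge / filter pipeline over the coverage counters. Condensed for readability: the repeated --rc switch list is shortened to $rc, the --ignore-errors switch the '/usr/*' pass carries is dropped, and cov_base.info is assumed to have been captured before the tests ran (that step is earlier in the job, not in this excerpt).

out=/home/vagrant/spdk_repo/output
rc="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"   # abbreviated

# 1. Capture the counters accumulated during the test run.
lcov $rc -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"
# 2. Merge the pre-test baseline with the test capture.
lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# 3. Strip third-party and uninteresting paths from the combined tracefile.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $rc -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done
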
00:37:01.960 15:48:47 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:01.960 15:48:47 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:37:01.960 15:48:47 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:01.960 15:48:47 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:01.960 15:48:47 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
+ [[ -n 5307 ]]
+ sudo kill 5307
00:37:02.004 [Pipeline] }
00:37:02.017 [Pipeline] // timeout
00:37:02.024 [Pipeline] }
00:37:02.037 [Pipeline] // stage
00:37:02.042 [Pipeline] }
00:37:02.053 [Pipeline] // catchError
00:37:02.061 [Pipeline] stage
00:37:02.062 [Pipeline] { (Stop VM)
00:37:02.071 [Pipeline] sh
00:37:02.349 + vagrant halt
00:37:06.606 ==> default: Halting domain...
00:37:13.224 [Pipeline] sh
00:37:13.502 + vagrant destroy -f
00:37:17.686 ==> default: Removing domain...
00:37:18.264 [Pipeline] sh
00:37:18.544 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:37:18.580 [Pipeline] }
00:37:18.596 [Pipeline] // stage
00:37:18.602 [Pipeline] }
00:37:18.616 [Pipeline] // dir
00:37:18.621 [Pipeline] }
00:37:18.637 [Pipeline] // wrap
00:37:18.643 [Pipeline] }
00:37:18.657 [Pipeline] // catchError
00:37:18.667 [Pipeline] stage
00:37:18.669 [Pipeline] { (Epilogue)
00:37:18.682 [Pipeline] sh
00:37:18.962 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:27.083 [Pipeline] catchError
00:37:27.085 [Pipeline] {
00:37:27.099 [Pipeline] sh
00:37:27.379 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:27.637 Artifacts sizes are good
00:37:27.645 [Pipeline] }
00:37:27.660 [Pipeline] // catchError
00:37:27.671 [Pipeline] archiveArtifacts
00:37:27.679 Archiving artifacts
00:37:27.775 [Pipeline] cleanWs
00:37:27.787 [WS-CLEANUP] Deleting project workspace...
00:37:27.787 [WS-CLEANUP] Deferred wipeout is used...
00:37:27.793 [WS-CLEANUP] done
00:37:27.795 [Pipeline] }
00:37:27.810 [Pipeline] // stage
00:37:27.825 [Pipeline] }
00:37:27.838 [Pipeline] // node
00:37:27.843 [Pipeline] End of Pipeline
00:37:27.884 Finished: SUCCESS
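
For reference, the Stop VM and Epilogue stages above reduce to a handful of commands when replayed outside Jenkins. This is an illustrative sketch, not pipeline source: the script paths and workspace location are taken from the log, while the flamegraph SVG redirect is an assumption (the log does not show where that output is written).

# Render the build-timing flame graph (output filename assumed)
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
    --countname seconds output/timing.txt > output/timing.svg
vagrant halt                        # Stop VM stage: shut the test VM down
vagrant destroy -f                  # then remove its domain entirely
mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh      # Epilogue stage
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
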